Trust and Internet Identity Meeting Europe
2013 - 2020: Workshops and Unconference

TIIME 2015 Session 11: Self-Assessment Tool Requirements Gathering

Conveners: David Groep, Hannah Short

Abstract: Useful and feasible assurance can be promoted by transparency. Assessing compliance based on formal audits poses barriers for many organisations. Alternatively, transparent self-assessment, like peer review, can also give sufficient trust, but in distributed systems it doesn't scale without automation. We propose to explore these topics and brainstorm about what an automated tool might look like.

Tags: Trust, Assessment, LoA

Notes

Self-assessment tool use cases:

  • LoA assessment for IDPs
  • Sirtfi compliance for IDPs and SPs
  • DP CoCo for EU/EEA
  • SP Assurance level ("inverse" of IDP LoA assessment)


Tool Requirements:

  • Responsibility for the tool should be at the federation level; this does not preclude running the tool centrally. This will also aid scalability
  • Tool should send assessment requests to organisations based on contact information in metadata
  • The tool should support multiple question types: yes/no and multiple choice
  • Machine-readable responses (yes/no and multiple choice) should be backed by secondary free-text evidence (see the data-model sketch after this list)
  • The tool should facilitate peer review; peer assignment should not be determined by the assessee
  • Results of assessments should be made available; individual assessee results would be private to the assessee but an aggregated view should be freely available
  • Fed Ops should have access to the results of the assessments
  • Access control for an assessment should facilitate private and public sharing
  • The tool should support re-assessment and have configurable behaviour in the event that the re-assessment is not done or if it fails
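
A minimal sketch of how these requirements could map onto a data model, with hypothetical class and field names - machine-readable answers carried alongside a free-text evidence field, and assessments versioned per entity:

```python
# Sketch only: the names are illustrative, the shape follows the notes.
from dataclasses import dataclass, field
from enum import Enum

class QuestionType(Enum):
    YES_NO = "yes/no"
    MULTIPLE_CHOICE = "multiple choice"

@dataclass
class Question:
    text: str
    qtype: QuestionType
    choices: list[str] = field(default_factory=list)  # only for multiple choice

@dataclass
class Answer:
    question: Question
    value: str          # machine-readable: "yes"/"no" or one of the choices
    evidence: str = ""  # free text or links backing up the answer

@dataclass
class Assessment:
    entity_id: str      # the assessed IDP/SP, taken from federation metadata
    version: str        # question sets and answers should be versioned
    answers: list[Answer] = field(default_factory=list)
```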


LoA self-assessment tool requirements

Use cases:

For the 'low-risk' use cases, self-assessment is enough, and a tool might be filled in by an IDP operator.

For Sirtfi: in v1 it is just asserting compliance - the tool might automatically update the metadata in the federation. Centrally or per federation? Since it engages federation operations, responsibility and ownership suggest a per-federation approach. That will also help with scaling; use delegation to scale.

A central tool would bring economies of scale. Running that as a service can still delegate responsibility.

The tool will guide them through an assessment. An IDP admin can log in to the tool, fill in the assessment, and then automatically compare. The contact element from the IDP metadata can be used to generate a login link for that person - so you reach the right person responsible for the IDP/SP, using the SAML2 MD as the trusted source for who can log in to the tool.
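
As a rough illustration of that flow, the sketch below (Python 3.9+, hypothetical function names and base URL) pulls the contact addresses from a SAML2 EntityDescriptor and mints a single-use login link for the responsible person:

```python
import secrets
import xml.etree.ElementTree as ET

MD_NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

def contact_emails(entity_xml: str, contact_type: str = "technical") -> list[str]:
    """Return the EmailAddress values of the requested ContactPerson type."""
    entity = ET.fromstring(entity_xml)
    emails = []
    for contact in entity.findall("md:ContactPerson", MD_NS):
        if contact.get("contactType") == contact_type:
            for addr in contact.findall("md:EmailAddress", MD_NS):
                # Addresses in metadata are often written as "mailto:" URIs.
                emails.append((addr.text or "").removeprefix("mailto:"))
    return emails

def make_login_link(entity_id: str, base_url: str = "https://assess.example.org") -> str:
    """Mint a single-use token link tied to the entity being assessed."""
    token = secrets.token_urlsafe(32)
    # A real tool would store the (token, entity_id) pair server-side with an
    # expiry before mailing the link to the contact address.
    return f"{base_url}/assess/{entity_id}?token={token}"
```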

The contact can be a shared generic address, but that is still fine - the login will be based on SAML, so you can still see who actually did log in.

The tool can also support peer review.

It makes a set of assertions available, and peer review makes them more evaluable. Peer review in the IGTF by known peers adds value to the raw data of the self-assessment. The peers are not entirely random, and an RP review is considered more 'valuable'. In the IGTF, participants are expected to contribute a bit in kind, through reviews and attendance.

But every review has value, as long as you can identify clique formation and 'eliminate' those results.

In SAML MD, you can identify SP and IDP reviews automatically.

Trust typically scales to ~150 humans (Dunbar's number), and you need a distribution within that. So for a 4000-IDP system you might want subgroups ;-) Looking for clique formation is well known from OpSec trust groups, but also from the old Thawte WoT notaries and the PGP strong set.
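
Clique detection on the review graph could look like the toy sketch below (illustrative data and entity names, networkx assumed): keep only mutual review pairs and flag maximal cliques, whose internal reviews can then be down-weighted when aggregating results.

```python
import networkx as nx

# (reviewer, reviewed) pairs - hypothetical data
reviews = [
    ("idp-a", "idp-b"), ("idp-b", "idp-a"),
    ("idp-a", "idp-c"), ("idp-c", "idp-a"),
    ("idp-b", "idp-c"), ("idp-c", "idp-b"),
    ("sp-x", "idp-a"),
]

g = nx.Graph()
pairs = set(reviews)
for a, b in reviews:
    if (b, a) in pairs:        # keep only mutual review pairs
        g.add_edge(a, b)

# Any maximal clique of size >= 3 built purely from mutual reviews is suspect.
suspect = [c for c in nx.find_cliques(g) if len(c) >= 3]
print(suspect)                 # [['idp-a', 'idp-b', 'idp-c']] (order may vary)
```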

For the tool's reporting interface: you can identify the intermediate parties, and these should be visible.

For the tool: visualise a comparison to the average baseline maturity - compare with the SURFnet maturity scan and its web (spider) diagram (see Mikael's presentation). Publication might have to be delayed in order not to encourage false reporting. Comparing to the average is fine, as long as that result remains (for a while at least) private to the compared party. For a 1-on-1 comparison, you need the approval of both parties.

For the 'quality' of the assessment, use a scale: not present, present, documented, document-reviewed, compliance checked regularly (0..5)
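
As an ordered enum this could look as follows; note that the notes list five labels against a 0..5 range, so the exact numbering here is an assumption:

```python
from enum import IntEnum

class Maturity(IntEnum):
    NOT_PRESENT = 0
    PRESENT = 1
    DOCUMENTED = 2
    DOCUMENT_REVIEWED = 3
    COMPLIANCE_CHECKED_REGULARLY = 4  # notes say "(0..5)"; a level may be missing

assert Maturity.DOCUMENTED > Maturity.PRESENT  # ordered comparisons come for free
```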

It should not be possible to simply bypass the re-assessment process.

Self-assessment on entry is fine, but you need persistence of old data.

If the self-assessment changes, who should react? At least the federation should not knowingly continue to rely on false data, so the federation may intervene if the maturity degrades or changes over time.

The proper point is probably the federation.

In the US health sector, evolution is modelled on reacting to incidents and then re-assessing.

It may be cultural - being required to report versus assigning blame, instead of using it as an opportunity for improvement. Or the risk of not reporting compliance is too great - e.g. the risk of losing the entire ability to operate as a medical facility (for UChicago).

What is the gain for the IDP in using the self-assessment? The federation adding trust marks to the SAML MD, which SPs can then start filtering on. The marks get assigned when you meet the minimum requirements in the specific trust area - and …
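
For illustration, trust marks of this kind are typically published as entity attributes in the SAML MD. The sketch below (hypothetical helper name; the Sirtfi identifier shown is the one REFEDS later registered) checks whether an entity carries a given mark, which is what an SP-side filter would do:

```python
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "mdattr": "urn:oasis:names:tc:SAML:metadata:attribute",
    "saml": "urn:oasis:names:tc:SAML:2.0:assertion",
}
ASSURANCE = "urn:oasis:names:tc:SAML:attribute:assurance-certification"
SIRTFI = "https://refeds.org/sirtfi"

def has_trust_mark(entity_xml: str, mark: str = SIRTFI) -> bool:
    """True if the EntityDescriptor carries the mark as an entity attribute."""
    entity = ET.fromstring(entity_xml)
    for attr in entity.findall(
        "md:Extensions/mdattr:EntityAttributes/saml:Attribute", NS
    ):
        if attr.get("Name") == ASSURANCE:
            if mark in (v.text for v in attr.findall("saml:AttributeValue", NS)):
                return True
    return False
```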

People are usually honest, as long as there is transparency and the possibility of questions - the IGTF found that people are usually self-critical (in IE, IT, and NL alike). But in those cases there was no negative impact.

What would happen if you were disqualified unless you scored all 5s? Would everyone then just tick 5?

Maybe: if you claim it's documented, require a pointer to the document. You need a free-text field explaining how you came to that statement. Not the entire PDF, just links, some of which may point to private pages.

The InCommon Participant Operational Practices has a few issues. One was the free format, so a structured format will help here. The SCI document has a structured set of questions. You need automation here.

Should the tool publish the answers? This has to be access-controlled. The results may be shared, targeted at specific consumers: share with the federation operator, or share with the e-Infrastructures so that they will then allow your IDP in (maybe they only do that if you agree to share the results).

Identifying the people with whom to share? Take them from the SAML MD, but use the send-login-link mechanism, now used for IDPs, also for SPs?

Also share with the federation operator.

Visualising? The federation may aggregate the results of each self-assessment and assign an overall label to it (bronze, silver, gold, or so), but not visualise the details.

Spider diagram categories are a configuration issue for the tool.
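
A sketch of that aggregation, with entirely hypothetical thresholds and category names (loosely after SCI), keeping the spider-diagram categories as configuration:

```python
# Categories are configurable per assessment profile, per the notes.
CATEGORIES = [
    "operational security", "incident response", "traceability",
    "participant responsibilities", "data protection", "documentation",
]

def overall_label(scores: dict[str, int]) -> str:
    """Map the weakest category score (on the 0..5 scale) to a coarse label."""
    floor = min(scores.get(c, 0) for c in CATEGORIES)
    if floor >= 4:
        return "gold"
    if floor >= 2:
        return "silver"
    return "bronze"
```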

The tool can serve multiple purposes; for the identity Vectors of Trust (VoT), how many trust marks go in? How many aspects we need to cover is still open (currently 6). Too many does not work either. Elements could come from SCI or the IGTF APs.

Tool and answers should be versioned.

Result of this should be a requirements document. Use cases of the tool:

  • LoA
  • Sirtfi
  • DP CoCo for EU/EEA
  • SP assurance (interplay/duality - focus on privacy, meaningful error messages, incident response notification by SPs to IDPs when there's something fishy with a user)


Never ask for the level, ask for the community requirements.

Is there an existing tool? Qualtrics maybe?

For the Kantara requirements it's currently a PDF, now changing to Excel, and later moving to an online tool.

Worst case the tool will just be a survey …

A page for this topic was created on the wiki of project AARC/Policy Harmonisation:
https://wiki.geant.org/display/AARC/AARC+Policy+Harmonisation