The Florida Legal Aid Resource Federation

Final Report for 2017-18 Pilot Project


Technology Innovation Grant Final Report:

Piloting the Florida Legal Aid Resource Federation


Table of Contents

Executive Overview:

What we did: prototyped a directory information-sharing system

How we designed and implemented this project:

What we learned:

Our standardized form has improved providers’ information-sharing capabilities

In this pilot, we pivoted to a peer-based ad hoc approach to verification.

Further research, development, and deliberation is needed re: verification

Challenges of matching people to services remain unresolved

Finally, we learned that developing interoperable tools ‘in the open’ is possible – but it requires facilitative resources, commitments, and accountability

Recommendations for the future:

A range of technical objectives for future development in our ‘backlog’

Further research and deliberation re: information-sharing and verification

Integrations: valuable uses for the Florida legal aid resource directory data

Additions: other relevant services beyond LSC / FBF grantees

Extending and refining the Open Referral standard:

Investing in open source development

Appendix A: Technical Deliverables

Appendix B: Key links


Executive Overview:

The Florida Legal Aid Resource Federation was formed to make it easier to share information about the civil legal resources available to people in need — so that this information can be found and used in whichever way is most appropriate and effective.

Before our pilot, information about Florida’s legal aid programs was scattered among multiple redundant sources — from websites to spreadsheets to Word documents — each updated ad hoc, often by volunteers or interns. No canonical, up-to-date source of this information was available, even within providers’ own case management systems. This posed a challenge for providers, who were expected to update their information over and over again in various places; for clients, who struggled to find information about services; and for innovators, who need this information to build useful tools.

The FLARF pilot was designed to solve this problem by building resource directory infrastructure through which providers could update their information just once — and have the resulting data be automatically shareable across any number of information channels.

Our pilot was successful, yielding a system of forms and lists that effectively circulate standardized data about each provider among their peers — and, potentially, with third-party systems. Any number of other initiatives can now leverage this resource directory data infrastructure.

This report details our results, methodology, lessons learned, and proposals for the path ahead.

Many people helped make this project possible. Community Legal Services of Mid-Florida anchored the project, under the leadership of Chief Executive Officer Kim Sanchez. Greg Bloom, of Open Referral, directed the project and authored this report. We received support from the Legal Services Corporation’s Technology Innovation Grant and the Florida Bar Foundation. Our technical partners were Benetech and LegalServer. Thanks to Josh Lazar and Eli Mattern at CLSMF, Alan Hill at Three Rivers Legal Services, Corrine Lincoln at Coast to Coast Legal Aid, Mary Haberland and Mark Gaul at Bay Area Legal Services, and Ilenia Sanchez-Bryson at Legal Services of Greater Miami. We also thank Lea Remigio at Florida Legal Services, Stephen Caines and the Access to Justice Fellowship.

For questions about Open Referral, reach out to info@openreferral.org. For questions about the Florida Legal Aid Resource Federation, reach out to legaldirectory@legalservicesmiami.org.


What we did: prototyped a directory information-sharing system

Forms through which legal services providers can update their own information.

Applying the Open Referral data model, augmented through our user research to reflect the specific needs of the legal aid domain, we worked with Benetech to prototype and iterate a form through which organizations can publish information about their services (including what services they provide, where they are accessed, and how to access them) in a machine-readable format.
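To make this concrete, here is a rough sketch of the kind of machine-readable record such a form might emit. Field names loosely follow HSDS’s organization, service, and location tables; the provider and all values are hypothetical, and the published specification remains the authoritative schema.

```python
import json

# Illustrative only: a legal aid service record shaped along the lines of
# Open Referral's HSDS tables. Consult the published spec for the
# authoritative field names and structure.
record = {
    "organization": {
        "id": "org-001",
        "name": "Example Legal Aid Society",  # hypothetical provider
        "url": "https://example.org",
    },
    "service": {
        "id": "svc-001",
        "organization_id": "org-001",
        "name": "Eviction Defense",
        "status": "active",
        "application_process": "Call the intake line or apply online.",
    },
    "location": {
        "id": "loc-001",
        "name": "Downtown Office",
        "county": "Alachua",  # jurisdiction drives eligibility (see below)
    },
}

# A compliant form would publish this as machine-readable JSON:
print(json.dumps(record, indent=2))
```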

LegalServer built its own version of this form, which we subsequently iterated through rounds of user testing across the state. [See user documentation here.] We now have a prototyped open source version of this form, which can be adapted and replicated for use in other contexts. [See Appendix A.]

A verification tool prototype, through which updates can be vetted.

Benetech prototyped a verification tool that can receive data from a compliant form, present records to a designated verifier, and either request additional information or republish verified information.
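The loop we envisioned can be reduced to a record moving between a few statuses. Below is a minimal sketch in Python of that flow as we understand it; it is illustrative only, not Benetech’s code, and all names are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    record: dict
    status: str = "pending"        # pending -> verified | needs_info
    notes: list = field(default_factory=list)

def review(submission: Submission, approve: bool, note: str = "") -> Submission:
    """Apply a designated verifier's decision to a pending submission."""
    if approve:
        submission.status = "verified"    # ready to republish downstream
    else:
        submission.status = "needs_info"  # triggers a request to the provider
        submission.notes.append(note)
    return submission

# Example: a verifier bounces a record with a missing intake phone number.
sub = Submission(record={"service": "Eviction Defense", "phone": ""})
review(sub, approve=False, note="Please provide an intake phone number.")
print(sub.status, sub.notes)
```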

Our test with this tool was successful in that data was able to flow from both forms and back to LegalServer. However, our test also revealed challenges for this modular, interoperable approach to verification. More research and development is needed to establish a formal system of verification. [See below for more analysis.]

Aggregated data from (almost) all of Florida’s legal services providers.

Having conducted a successful statewide test, we can now extract comprehensive data about every LSC- or FBF-funded legal services provider in Florida. A sample set of this data is available here.

Basic filtering and querying functionality in LegalServer.

In its new ‘Referral Hub’ feature set, LegalServer has developed a basic ability to filter relevant legal resources based upon a limited set of criteria from an intake process, including zip code, problem code, and core eligibility criteria.
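As a rough illustration of what such filtering involves (our sketch, not LegalServer’s code; the data shapes are assumptions), a query narrows the directory by jurisdiction, problem code, and eligibility criteria:

```python
# Narrow a directory to services whose coverage, problem codes, and
# eligibility criteria all accommodate a given intake. County stands in
# here for whatever geography (zip code, etc.) an implementation uses.
def matching_services(directory, county, problem_code, client_attributes):
    """Return directory entries matching an intake's county, problem code,
    and core eligibility criteria."""
    return [
        svc for svc in directory
        if county in svc["counties"]                         # jurisdiction
        and problem_code in svc["problem_codes"]             # legal issue
        and svc["eligibility"].issubset(client_attributes)   # who is served
    ]

# Hypothetical directory entry and intake:
directory = [
    {"name": "Eviction Defense Project", "counties": {"Alachua"},
     "problem_codes": {"landlord-tenant"}, "eligibility": {"income_eligible"}},
]
print(matching_services(directory, "Alachua", "landlord-tenant",
                        {"income_eligible", "veteran"}))
```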

Recommendations for the future.

During the pilot’s final phase, we evaluated our output, synthesized lessons learned, and sketched out possible paths forward. This report concludes with recommendations to the Florida Bar Foundation, the Legal Services Corporation, and the Open Referral Initiative. In 2019, we will engage with the national Open Referral Legal Aid workgroup to consider proposing a ‘legal aid extension’ to Open Referral’s Human Services Data Specification. We also urge continued research, deliberation, and development of monitoring methods to ensure sustained accuracy of FLARF’s directory data.


How we designed and implemented this project:

The FLARF pilot conducted a kind of ‘participatory action research’: a representative user group engaged in reflection, question articulation, research, design, and evaluation. Each of these phases informed a corresponding phase of technology development; all are described in detail below.

Establishing a user group and project charter.

Upon initiation of the project, through a series of MOUs, we formed a ‘lead stakeholder’ user group. Participating stakeholders received work stipends for their time. Through dialogue with all team members, we drafted a project charter which outlined goals, roles, and other key project parameters.

We convened our user group in May 2017, in Tampa, Florida, where we approved the project charter, revised our user personas, solicited feedback on early prototypes, developed a shared mental model of our desired system, and articulated various criteria for success and evaluation metrics.

Engaging in participatory research.

Through one-on-one engagement and group deliberation, we honed our User Personas to articulate the behaviors and needs of various types of stakeholders in the system – from end users, to referring providers, to direct service providers, to analysts and researchers. These personas gave us a common vocabulary and a shared mental model of how an intervention might change behaviors.

Through this process, we developed a backlog of ‘user stories.’ Users then ranked the value of these user stories, and our technical partners ranked each story’s level of technical difficulty. Our development agenda was set according to ‘user stories’ that were both high-value and achievable.

Conducting iterative development.

Benetech produced an early prototype for testing in May 2017, and a ‘beta’ version of their form and verification tool in September 2017. (Our activities were paused for more than a month due to Hurricane Irma; in October, we conducted our test and evaluation of the form.)

The evaluation yielded generally positive feedback, but revealed the need for changes that would require substantial structural adjustments to Benetech’s architecture. Benetech released a quick second version (developed in a lightweight web-based survey tool) in November to get feedback on the value of such changes, which the user group tested and evaluated at the TIG Conference in January 2018.

LegalServer released the first version of its form for testing by all of its Florida users in April, and we evaluated the results of this test at a user group convening in May. A fourth test was initiated in July, and a final test with all statewide users was deployed in October.

Developing an Evaluation Framework

One of the first objectives for FLARF upon formation was the articulation of a ‘Success Spectrum.’ This provided a structured perspective, objective by objective, of what our results might look like at four levels:

1) a minimal threshold (a set of things that MUST happen for the project to be considered a success);

2) our target objectives (of which we aim to accomplish roughly one- to two-thirds);

3) an epic triumph (describing our wildest hopes and dreams for the future); and

4) failure (including a list of things that must not happen if the project is to be considered a success).

Once we developed this matrix, we extended it to include our success criteria, metrics, and method of measurement for each objective.

The Success Spectrum is summarized below, deliverable by deliverable.

A self-publishing form in LegalServer, and standalone
Fail: Data fields not relevant; form is cumbersome; users don’t use it (one-and-done).
Minimum success: Form collects minimally valuable data; issues with bugs and usability are documented.
Target success: Form provides the needed data with a user experience (length, guidance, etc.) that stakeholders find acceptable.
Epic success: Form derives needed data from outside sources and automatically verifies it.

Verification tool
Fail: No working tool, or tool does not integrate; we test the wrong things and do not learn enough of value to build the next version.
Minimum success: Some structured data is presented and editable, with learnings about what reliable verification will require.
Target success: Reusable, clean code that enables presentation of data, editing of data, and a basic feedback loop protocol.
Epic success: All available data, including complex unstructured data, is presentable and editable through the verification tool, with strong feedback loops that enable rapid verification.

Search / query functionality in LegalServer
Fail: Search returns too many results with no indication of relevance, or no results and no suggestions; false negatives; unnavigable; inaccurate results; external tool overwrites local data or removes users’ autonomy to modify, accept, etc.
Minimum success: Reference only — little context, no indication of availability or quality.
Target success: Geographically relevant and granular data; augmentable with localized client notes; directory records include basic eligibility info and timing.
Epic success: New updates offered, with pre-verification via crowd/users; notifications of new grants, services, and intake methods; automated referrals and follow-up with time tracking; 1:1 referral counts; documented intake.

DATA: Aggregated, verified, and available via API
Fail: Errors in data; lack of clarity; out of date; otherwise untrustworthy.
Minimum success: Aggregated data is queryable in LegalServer.
Target success: Aggregated data is reliable, and queryable by third-party systems like intake and triage portals, LawHelp.
Epic success: Aggregated data is comprehensive, and exchangeable for resource data on social services from 2-1-1 providers, etc.

FINAL REPORT: Feasibility study + recommendations for ongoing maintenance
Fail: Report unwritten, inaccurate, or just plain unreadable; no clear pathways forward.
Minimum success: Report summarizes our accomplishments and our lessons learned.
Target success: Report shares lessons learned, including a set of recommendations for post-pilot sustainability and next steps for features and scaling technology; report charts a path to non-LSC legal resources.
Epic success: Final report includes a detailed and ready-to-implement business plan, along with a budget including already-secured funding; additional partners identified; growth and scaling plan defined; technology well documented for additional development and adoption.

Open source code and documentation
Fail: Code not redeployable; code undocumented.
Minimum success: Code available, though not well-documented or easily reusable.
Target success: Code documented, with instructions for repackaging.
Epic success: Code evolves into redeployable software that becomes an industry standard.

Proposed ‘civil legal aid extension’ for Open Referral format
Fail: No consensus; no proposal.
Minimum success: Feedback shared with Open Referral initiative.
Target success: Proposal becomes part of emergent standard.
Epic success: Catalyzes a cascade of innovative tools and applications.

Project overall
Fail: Don’t meet deliverables; form not used; users don’t trust system/data; users don’t believe in future success; project dies after pilot.
Minimum success: Deliverables met; we can assess whether the project addresses pain; hypothesis not validated, but we learned.
Target success: System in use; consensus about post-pilot path.
Epic success: Project catalyzes the legal field; project catalyzes social service / 2-1-1 partnerships.


What we learned:

As a pilot project, the most important deliverable for FLARF’s first phase is a set of lessons learned about this complex issue (along with accompanying recommendations for the future, in the next section). Our work yielded a range of valuable insights — from validation of our assumptions about the value of information sharing, to the revelation of new dimensions in the challenge of matching people to services, to insights about the capacities needed for successful implementation of projects like this.

Our standardized form has improved providers’ information-sharing capabilities.

Given the parameters established in our evaluation framework, we collected a baseline set of data about how providers are currently sharing and using directory information.

First, we interviewed Lea Remigio, a stakeholder with previous experience managing resource directory information on Florida’s legal aid providers in LegalServer. In 2010, Remigio had been responsible for collecting resource directory information and manually populating it into each respective provider’s LegalServer instance. Altogether, this took Remigio about 60 hours.

We then sent out an initial survey to each provider to establish a baseline of information about providers’ current behaviors. [Results are viewable here.]

We discovered, for instance, that most providers update their own resource directory information in three to five different information sources, at least three times a year, and that each update takes about an hour. Yet providers also reported that they are unable to serve an average of 45% of incoming referrals, and that more than 80% of these instances involve referrals for issues outside the providers’ stated priorities. This indicates that providers are not only spending redundant time updating their own information, but are also still coping with high levels of incorrect inbound referrals. Furthermore, more than half of the providers indicated that they were not using the Organizations Module in LegalServer to facilitate information management and referrals.

Some evaluation criteria — such as measuring the number of requests for edits of records in the resource directory — changed with our mid-project pivot away from a structured information verification system. Since this pivot also coincided with some development delays, we were not able to methodically measure usage of resource data for referrals, or changes in the rate of false referrals. (However, our evaluation surveys did return some unstructured positive feedback from providers who are already starting to use this information to make referrals.)

On key output metrics related to the provider information form, our evaluation indicated considerable success. User group feedback on the ease-of-use of the form steadily improved with each test, from 3.5 (out of 5) in the first test, to 3.8 in the second test, to 4.45 in the final statewide test. All but one provider participated in the statewide onboarding process; though only half of providers answered the feedback survey, the results were uniformly positive. Asked whether this method of sharing information represented an improvement over their current process, providers’ answers averaged 4.36 out of 5.

Providers indicated that it takes them between 30 minutes and 2 hours to fill out the new form. This may seem like a minor increase in time compared with any single update under previous methods, yet it represents a net savings of several hours per provider per quarter: previously, each provider spent roughly an hour per update, across three to five sources, at least three times a year, whereas this ‘canonical’ (and reusable) entry should become the only necessary source of information moving forward.

There is currently no external data management required for this process, saving approximately 60 hours of staff time per statewide update, compared with the process conducted by Lea Remigio in 2010. 

As LegalServer’s new ‘Referral Hub’ becomes fully operational, and as this information becomes integrated with other systems such as intake and triage workflows, we urge continued investment in measurement and evaluation of the effectiveness of this method for information sharing.

In this pilot, we pivoted to a peer-based ad hoc approach to verification.

More research is needed to assess the viability of monitoring and verification methods.

For legal aid resource directory information to be trustworthy, it should be verified.

A key objective of the Florida Legal Aid Resource Federation is to produce trustworthy information. This is more challenging than it may initially seem.

The Legal Services Corporation contractually requires all grantees to provide up-to-date information about their services to ‘a statewide website.’ However, it is not clear how much information is required to comply with this requirement. Furthermore, evidence from the field suggests that this responsibility is rarely assigned to a particular staffer. The resulting supply of information is often quite limited.

Service providers report an ongoing struggle to get accurate information from their peer providers, even within their own organization. (For more, see our ‘Service Provider’ stakeholder profile.) 

Given the complexity and flux of service details, along with capacity challenges and the inevitability of human error, we learned that the reliability of legal services directory information is dependent upon feedback loops that facilitate ongoing monitoring, verification, and correction — even with contractual requirements in place.

In other words, the Florida Legal Aid Resource Federation believes we should trust organizations to update their own information, yet we must also verify they have done so completely and accurately.

For example, in our statewide onboarding process, one organization entered only one service, titled “Immigration,” to represent a range of programs that span from human trafficking to deportation defense. Another organization entered contact information for a staffer who has in fact not been with the organization for quite some time. Yet another organization refused to participate entirely, even though we assume it is contractually required to do so.

Technology cannot itself solve these issues of accuracy or even compliance, but it can be used to structure and reinforce a set of protocols and procedures with which Florida’s legal aid community ensures the ongoing accuracy of this information.

Our attempt to develop an independent verification tool yielded inconclusive results.

At the start of our pilot, we hypothesized that a specific user could be designated to review each submitted legal aid resource record, and that this verification loop could ensure the accuracy of resource directory information within a reasonable amount of time. (We outlined the characteristics of this type of user in this ‘data admin’ user persona.) We articulated functionality for a verification tool that would enable a designated vetter to see all record updates, check the contents of each record, edit the contents, and send a request for updates back to the organization.

Benetech presented a prototype for this tool at our user convening in May of 2017, and users provided feedback. Benetech iterated on the prototype, adding features such as email notification for the validator and additional structural supports for several fields. Our test of this updated verification prototype release in September was a success: the designated user was able to see, review, and confirm each record submitted by Benetech’s form; the tool could expose those verified records back to a third-party system (LegalServer, in this case).

Integration with LegalServer was minimally successful: Benetech’s tool was able to receive data from LegalServer and pass data back to it. LegalServer developed this functionality in a secondary module that was able to receive data from the verification tool. This technically met the minimally-necessary threshold specified by our project deliverable; however, it did not meet our target success criteria for a functional feedback loop between LegalServer users and the verifiable directory information. The redundant, isolated feature set was hard for users to find, and its function was unclear.

In retrospectively discussing this design decision, LegalServer posed several questions — for example: how would the fields stay in sync between the two systems over time? This question about the cost of ongoing maintenance may be valid, and also answerable; however, it arose late in the project timeline, after the expected cycles of iterative testing and evaluation had gone unfulfilled. Without such an agile process, we were not able to estimate the necessary amount of this labor, which makes it difficult to assess possible answers.

As a result, we did not continue development on these components beyond their initial minimally-successful cycle. [See our notes from the retrospective evaluation.]  

Further research, development, and deliberation is needed re: verification.

Instead of a structured verification system, FLARF has provided instructions for legal aid providers to contact each other to request ad hoc updates and/or clarifications. This method of ad hoc, peer-based monitoring and clarification should suffice at a small scale for the near future; however, at larger scales and/or longer timeframes, we suspect that legal aid networks will need a more deliberate method of monitoring, feedback, and verification.

Below are some options that we believe are worth exploring in future initiatives:

Resume development of an external verification tool

One of our key initial hypotheses posited that a designated vetter, using a dedicated modular tool, can effectively receive and edit directory data from — and submit data back to — LegalServer, among other sources of such information. Though our project overall was a success, this hypothesis remains untested. A future project could revisit this hypothesis by developing and testing modular tools that enable publishing and editing/verification in ways that are interoperable and sustainable.

Our ‘user stories’ backlog includes a range of functionality articulated in our participatory research that could be addressed through future development — such as the ability for both the verifier AND referral providers to send various kinds of notifications to record owners (from alerts within LegalServer, to email messages); visualization of edits (‘diffs’); version control; the ability for third-party users to request or suggest edits; etc.

Experiment with an external ‘exception notification’ system

In our final user group convening, Benetech also suggested the possibility of a different framework for feedback loops: an ‘exception notification system.’

Rather than attempting the technically challenging task of editing and synchronizing information across multiple systems, an exception-notification system would have a simpler objective: simply flag and alert users when a record has been found to have conflicting or otherwise exceptional information. For example, if a legal aid provider attempts to refer a client to a service that appears in their directory, only to find that key information is incorrect, the provider could flag the record, which could trigger some kind of event or reputation rating.

Another form of ‘exception notification’ could be between records: for example, if a legal aid resource directory has a record that conflicts with the same record in a 2-1-1 resource directory, the exception-notification system can alert all involved parties that the conflict exists.
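A minimal sketch (ours, with assumed record shapes) shows how little machinery this approach requires: compare copies, flag disagreements, and alert the parties, rather than attempting to merge edits across systems.

```python
def detect_conflicts(record_a: dict, record_b: dict, fields: list) -> list:
    """Compare the same record as held by two directories; return the
    fields whose values disagree."""
    return [f for f in fields if record_a.get(f) != record_b.get(f)]

# Hypothetical copies of one record, held by a legal aid directory and a
# 2-1-1 directory respectively:
legal_aid_copy = {"id": "svc-001", "phone": "352-555-0100"}
two_one_one_copy = {"id": "svc-001", "phone": "352-555-0199"}

conflicts = detect_conflicts(legal_aid_copy, two_one_one_copy, ["phone"])
if conflicts:
    # A real system would notify both directory maintainers here.
    print(f"Exception: record svc-001 disagrees on {conflicts}")
```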

Internal reputation system

LegalServer suggested another approach that would involve an internal reputation rating system among legal services provider peers. Records shared within LegalServer could be flagged (or star-rated, or given a thumbs up/down, etc.) by peers when found to be incorrect or incomplete.

This possibility should be carefully explored, as we could imagine unintended consequences from unanticipated user behavior.

Perhaps more importantly, this solution only addresses LegalServer users — who, even in Florida, don’t represent the entirety of the civil legal services field. It is not clear how this solution would enable the collection, monitoring, and circulation of information about non-LegalServer users. (LegalServer suggests the possibility of creating external interfaces to enable non-LegalServer users to submit information to LegalServer’s ‘centralized’ directory system, but it is not clear what incentives such organizations would have to participate.) Furthermore, an internal, proprietary solution would deepen users’ dependence upon a single vendor, which may restrict their agency over time.

Distributed ledgers

One other hypothetical solution worth considering is a method of distributed registers. A register holds uniquely-identified canonical data, and enables distributed systems to access that data. This project has effectively created an internal register for legal aid services within LegalServer. This is useful in a case like Florida in which almost all legal services providers use LegalServer. The next step, which could perhaps be initiated by funders, would be to develop technology that supports distributed, interoperable registers. See the Open Data Institute’s report on registers here. 
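In miniature (a purely illustrative sketch), a register is a canonical, uniquely-keyed store that downstream systems resolve against, rather than each maintaining divergent copies:

```python
from typing import Optional

# The canonical register: one uniquely-identified entry per service.
canonical_register = {
    "svc-001": {"name": "Eviction Defense Project", "status": "active"},
}

def resolve(service_id: str) -> Optional[dict]:
    """Look up the canonical entry for a uniquely-identified service."""
    return canonical_register.get(service_id)

# A distributed system checks its local copy against the register:
local_copy = {"id": "svc-001", "status": "inactive"}
canonical = resolve(local_copy["id"])
if canonical and canonical["status"] != local_copy["status"]:
    print("Local copy is out of sync with the register.")
```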

It is worth noting that these options are not necessarily mutually exclusive. More research, development, and deliberation will be needed to establish answers to questions about the long-term viability of any such system of feedback and verification.

Challenges of matching people to services remain unresolved

Legal services agencies often have restrictions on the types of work they can do and the people they can serve. These restrictions may be a function of organizational mission and structure (e.g., providers who only practice family law, or who only assist veterans or immigrants). Restrictions on types of activities and/or people may also be imposed by funders as a condition of funding (e.g., programs that can only serve single mothers at a certain income level). As such, it is often a challenge to ascertain not just which legal services exist, but which services can be accessed by any given person.

During the course of our user research, we found that our stakeholders place very high value on the potential to match clients directly with services for which they qualify, based on their personal information.

However, to effectively determine whether a person is eligible for a given service that they might need, at least three pieces of information are necessary:

  1. The jurisdiction in which a person lives (in this case: county)
  2. The type of legal problem that they have
  3. A person’s identity (what type of person are they, in what type of situation?)

Each kind of information poses different kinds of challenges, which we discuss below.

The county code problem

A resident’s county is a key criterion in establishing their eligibility for services. However, users might spell county names in different ways, and there might even be competing claims about counties’ geographical boundaries.

Fortunately, the Federal Information Processing Standards (FIPS) provide a long-standing list of official county codes. Officially, the FIPS county codes have been deprecated; however, they have not been replaced and still appear to be in use. We recommend using the FIPS codelist as a standardized set of counties, unless and until an official replacement is established.
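For illustration, normalizing user-entered county names against the FIPS codelist might look like the sketch below. The two Florida codes shown are real FIPS entries; a full implementation would load the entire codelist rather than hard-coding it.

```python
# Map free-text county names to FIPS county codes so that jurisdiction can
# be compared reliably across systems.
FIPS_FL = {
    "alachua": "12001",
    "miami-dade": "12086",
}

def county_to_fips(name: str) -> str:
    """Normalize a user-entered county name to its FIPS code."""
    key = name.strip().lower().replace(" county", "")
    try:
        return FIPS_FL[key]
    except KeyError:
        raise ValueError(f"Unrecognized county: {name!r}")

print(county_to_fips("Miami-Dade County"))  # -> 12086
```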

The legal problem code problem

Another key criterion for matching people’s needs to the right service is identifying the particular kind of legal problem they have. The National Subject Matter Index (NSMI) offers an established set of ‘problem codes’ that is accepted as a standard in the civil legal services field.

We support the use of NSMI, but we also recognize a host of challenges that are associated with it. From our notes, those challenges include: "expansions; exceptions; priorities; real vs ideal vs expired problems; 'special’ legal problem codes developed locally; problem code errors; problem code definition disagreements; and sub-problems." 

Furthermore, this taxonomic challenge overlaps with the domain of the 2-1-1 Taxonomy, which contains a range of types of legal services, and which could potentially be ‘crosswalked’ (i.e. matched) with the NSMI codes.
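A crosswalk could be as simple as a mapping table. In the sketch below, the codes are placeholders, not actual NSMI or 2-1-1 Taxonomy entries; the point is only that a query expressed in one vocabulary can be translated into the other.

```python
# Hypothetical crosswalk from NSMI problem codes to 2-1-1 Taxonomy terms.
# All codes below are invented for illustration.
NSMI_TO_211 = {
    "NSMI-HOUSING-EVICTION": ["211-EVICTION-LEGAL-AID"],
    "NSMI-FAMILY-CUSTODY": ["211-CHILD-CUSTODY-LEGAL"],
}

def translate(nsmi_code: str) -> list:
    """Return the 2-1-1 terms crosswalked to a given NSMI problem code."""
    return NSMI_TO_211.get(nsmi_code, [])

print(translate("NSMI-HOUSING-EVICTION"))
```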

Our project did not attempt to solve the legal problem code problem. Rather, we recommend continued investment in research and development for new methods that can help providers, funders, and vendors cope with the challenges of service classification and vocabulary alignment.

The eligibility criteria problem

This may be the hardest problem we encountered. Types of people can be extremely granular (e.g., not just veterans, but veterans of specific wars), subjective (e.g., income, or housing status), and sensitive.

FLARF did not attempt to “standardize” all types of eligibility criteria. Given the vast, granular array of eligibility criteria observed even within our user group cohort, we should expect that an effort to establish a standardized set of all possible eligibility criteria would be a massive undertaking with high chances of failure. Furthermore, such an effort might carry significant potential to cause harm.

Any attempt to ‘standardize’ a formal set of all possible types of people might unintentionally exclude any number of unanticipated types of people — who would therefore not be recognized by the technologies and institutions that use the standardized categories.

That said, our user group identified a set of specific criteria that cover 'most' of their use cases (roughly estimated at 80%). Our technical partners included these options in their forms, though the set is not intended to be fixed or exhaustive.

Building on this experience, we have three recommendations for future development:

  1. Establish a standardized machine-readable syntax for eligibility rule logic (NOT describing all possible criteria, but rather a standardized way for this conditional logic to be expressed).
  2. Promote the development of tools that offer users a core set of ‘common eligibility criteria’ while accommodating an open set of exceptions.
  3. Explore (and, potentially, advocate for) possible policy interventions that address this problem at its source: in the grantmaking process.

For more about how to address these issues moving forward, see our subsection on Eligibility Criteria in the Recommendations section below.

In the meantime, LegalServer has internally addressed this eligibility criteria challenge by designating a set menu of recognized criteria. This internal taxonomy now comprises all structured options that are available for providers to describe eligibility for their services and/or filter for referrals.

Providers can also indicate more granular and/or exceptional eligibility criteria in an unstructured notes field. Furthermore, LegalServer’s Referral Hub includes a feature that displays ‘close matches,’ thus ensuring results include services that might be relevant even if not a complete ‘match.’

These features seem to meet our recommendation (#2 above) for eligibility information systems to accommodate an open set of exceptions. Still, we encourage future evaluation of this arrangement.

Finally, we learned that developing interoperable tools ‘in the open’ is possible – but it requires facilitative resources, commitments, and accountability

One of our clearest lessons learned is that it is possible to develop open source, interoperable technologies — yet it takes both a sufficient amount of resources for coordination, and a commitment to accountable development processes. We hope this insight can inform future efforts.

Our ambitious objective was to establish interoperable tools that could be redeployed regardless of a community’s technological landscape. Our plan entailed simultaneous development with two technical partners — one building within a proprietary system, another building open source modules.

Developing software with multiple partners in tandem turned out to require more coordination capacity for development operations — and stronger accountability processes — than we anticipated.

Miscommunication and delays are to be expected in any project — yet without dedicated oversight of joint development operations, such challenges compounded each other. In at least one case (the verification tool), our technical partner made a unilateral decision that necessitated a pivot from one of our project’s target objectives.

Our project was successful on all counts within our user group’s primary, proprietary technology environment; however, the resulting system may be less immediately replicable than we had initially intended for more heterogeneous and/or non-proprietary technological landscapes.

We recommend, in future projects, that all parties affirm their commitment to the process of open, interoperable, accountable development — including collaborative approaches to process design, decision-making, communication, delivery, and evaluation. Furthermore, projects should allocate sufficient resources to ensure that all parties can uphold those commitments.

This may entail more investment in project management and development operations.

In our project, we also made the mistake of combining the distinct roles of product manager and project manager in a single position. We recommend that these roles be separately designated and budgeted. Furthermore, the product manager — representing the users’ interests — should have some form of agency in the development of technical partners’ roadmaps, and the project manager should have some measure of oversight into technical partners’ work-plans.

Finally, we should highlight one successful step which should be considered a best practice: we not only formed a stipended user group, but specially designated — and additionally stipended — one member to be group coordinator. This enabled a peer (Ilenia Sanchez-Bryson of Legal Services of Greater Miami) to step into a leadership role, coordinating user activities and communication in ways that an outside consultant might have struggled to match.


Recommendations for the future:

One of the most valuable deliverables from this project is this proposed agenda for future development, as generated by our research, testing, and deliberation. This agenda includes:

A range of technical objectives for future development in our ‘backlog.’

We identified a range of potentially valuable features which we did not have the capacity or time to develop, but which deserve future consideration. This ‘backlog’ for future development is compiled on our Trello board, and a more granular version is in our ‘user story index.’ Prominent items include:

In LegalServer, improve capacities for ownership, permissions, and feedback around records:

For sharing information publicly:

For form development:

Further research and deliberation re: information sharing and verification.

We recommend a task force, to be appointed by the Florida Bar Foundation – and, ideally, resourced for further development – to 1) monitor the ongoing accuracy of the Legal Referral Hub, and 2) consider possible policy changes, operational protocols, and additional tooling that can ensure such accuracy.

When it comes to possible policy changes, we can identify at least one example of a set of rules from within our own user group. Before FLARF, CLSMF had an internal protocol for soliciting information about and from its programs (which are distributed across a range of locations and departments):

Rules like these could potentially serve to incentivize the sharing of accurate information statewide. However, they would not address patterns of under-specification of program information.

Toward that end, a task force should consider the following questions:

Integrations: Valuable uses for the Florida legal aid resource directory data.

Now that there is an established method of producing standardized resource directory data, the FLARF infrastructure can benefit a range of additional objectives in the field:

Additions: other relevant services beyond LSC / FBF grantees.

Though the 28 organizations funded by the FBF and/or LSC represent the large majority of legal aid resources available to Florida residents, there are still several other kinds of resources that fall outside of this set — and therefore may require other methods of resource data collection.

Extending and refining the Open Referral standard:

Overall, the Open Referral data model (the Human Services Data Specification) effectively met our expectations regarding the key types of information that a variety of users may need to know about services. We identified a few examples of information types that are not fully articulated in HSDS, and which may or may not be specific to legal aid. We offer these instances for consideration as either legal-specific extensions or candidates for inclusion in future versions of HSDS itself.

Handling schedules for ‘events’ like clinics.

Open Referral’s format allows an hourly schedule for each day of the week, but does not articulate formats for ‘event-based’ services that might be available only for a limited time, or that recur less frequently than weekly. We submitted an issue for Open Referral to consider accounting for these instances in future versions.

‘Access Methods’ to reflect different processes for applying to a service.

In this pilot, LegalServer implemented a field known as ‘Access Methods’ to indicate the various ways that users could access a service at a given location (whether physical or virtual, web or phone based, etc). This concept is synonymous with the term ‘application_process’ as articulated in HSDS. However, it offers additional structure for a more complex description of ‘ways in.’ This should be considered for inclusion in future iterations of HSDS. In the meantime, this structured data can also be expressed in a simple text string, which means it can still be made compatible with the current version of HSDS.
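For example (a sketch with assumed field shapes, not LegalServer’s schema), structured access methods can be rendered down to the single application_process string that the current version of HSDS expects:

```python
# Hypothetical structured 'Access Methods' data for one location:
access_methods = [
    {"type": "phone", "value": "352-555-0100", "hours": "Mon-Fri 9-5"},
    {"type": "web", "value": "https://example.org/apply"},
]

def to_application_process(methods: list) -> str:
    """Flatten structured access methods into one HSDS-compatible string."""
    parts = []
    for m in methods:
        hours = f" ({m['hours']})" if "hours" in m else ""
        parts.append(f"{m['type']}: {m['value']}{hours}")
    return "; ".join(parts)

print(to_application_process(access_methods))
# -> phone: 352-555-0100 (Mon-Fri 9-5); web: https://example.org/apply
```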

Addressing the complexities of eligibility criteria.

As we described in the Lessons Learned section on eligibility criteria, this pilot set a narrow scope around information about services, and did not attempt to establish a fixed set of types of people (e.g., age, gender, income level, immigration status, etc.).

We have three recommendations for ways to cope with the challenge of variable eligibility criteria:

  1. Open Referral (and/or some other relevant process) should establish a standardized machine-readable syntax for eligibility rule logic (one that gracefully handles exceptions).
  2. The legal services technology field should develop tools that recognize a core set of ‘common criteria,’ while ensuring that they can gracefully handle an open set of exceptions.
  3. Legal aid providers and funders should explore (and, potentially, advocate for) possible policy interventions that address this challenge at its source: in the grantmaking process.

Grantmakers can help address this problem by streamlining the complexity of the eligibility requirements they impose as a condition of funding, and/or by publishing those eligibility requirements as supplemental machine-readable logic.
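To illustrate what such supplemental machine-readable logic could look like, here is one hypothetical shape for eligibility rules. No such syntax has been standardized; the fields and operators below are our own invention, offered only to make the recommendation concrete.

```python
# A rule expresses conditions over client attributes, rather than
# enumerating all possible types of people. 'all'/'any' compose clauses.
rule = {
    "all": [
        {"attribute": "county_fips", "in": ["12001", "12086"]},
        {"attribute": "income_pct_fpl", "lte": 125},
        {"any": [
            {"attribute": "veteran", "eq": True},
            {"attribute": "age", "gte": 60},
        ]},
    ]
}

def evaluate(rule: dict, client: dict) -> bool:
    """Recursively evaluate a rule tree against a client's attributes."""
    if "all" in rule:
        return all(evaluate(r, client) for r in rule["all"])
    if "any" in rule:
        return any(evaluate(r, client) for r in rule["any"])
    value = client.get(rule["attribute"])
    if "eq" in rule:
        return value == rule["eq"]
    if "in" in rule:
        return value in rule["in"]
    if "lte" in rule:
        return value is not None and value <= rule["lte"]
    if "gte" in rule:
        return value is not None and value >= rule["gte"]
    return False

client = {"county_fips": "12086", "income_pct_fpl": 110, "age": 67}
print(evaluate(rule, client))  # -> True
```

In a syntax like this, unrecognized attributes simply fail to match, which is one way such rule logic could gracefully handle exceptions.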

We also recommend that Open Referral consider adjusting its Human Services Data Specification to allow a 'many-to-many' relationship between services, their eligibility criteria, and the sites at which they are offered. (Right now, eligibility criteria are a property of services; however, we found that some services’ criteria vary between locations. This suggests that eligibility criteria may belong in HSDS’s ‘service_at_location’ table.)
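In sketch form (table names follow HSDS, but placing eligibility here is the proposal, not the current specification), the adjustment would let one service carry different criteria at different sites:

```python
# Eligibility attached to the service-at-location pairing, rather than to
# the service itself. Values are hypothetical.
service_at_location = [
    {"service_id": "svc-001", "location_id": "loc-A",
     "eligibility": {"county_fips": ["12001"]}},
    {"service_id": "svc-001", "location_id": "loc-B",
     "eligibility": {"county_fips": ["12086"], "veteran": True}},
]

# The same service, queried at two sites, yields different criteria:
for sal in service_at_location:
    print(sal["location_id"], sal["eligibility"])
```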

Articulating the elements of feedback loops:

Flag to indicate inaccurate information.

Should the specification describe “feedback” from users regarding the quality (accuracy, completeness, etc.) of resource data? For example, if a user finds that a phone number has been disconnected, they may want to flag the phone field and even include a note with context.

Should such an information element include a codelist of statuses indicating the state of the issue (disputed, resolved, etc.)? Should there also be a log to record the history of these statuses?
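One hypothetical shape for such an element (all field names are ours, offered only to make these questions concrete) pairs a flag with a status codelist and a history log:

```python
# A user-reported issue on a specific field of a record, with a status
# codelist and a log of status changes.
flag = {
    "record_id": "svc-001",
    "field": "phone",
    "note": "Number has been disconnected.",
    "status": "open",  # codelist: open | disputed | resolved
    "status_history": [
        {"status": "open", "at": "2018-10-01", "by": "referring-provider"},
    ],
}

def update_status(flag: dict, status: str, at: str, by: str) -> None:
    """Move a flag to a new status and append the change to its history."""
    flag["status"] = status
    flag["status_history"].append({"status": status, "at": at, "by": by})

update_status(flag, "resolved", "2018-10-05", "record-owner")
print(flag["status_history"])
```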

A contact field specifically indicating who is responsible for the accuracy of a record.

Consider some concept of a contact responsible for the accuracy of the record. This might be different from any of the existing contacts listed elsewhere in the record (and may be a third party or a set of contacts/mailing list). Implementers might use this information to send alerts based on reports in the system.

Investing in open source development

For our recommendations for future open source development, we refer back to the final section of our ‘Lessons Learned.’ To recap, we recommend that future projects sufficiently invest in ‘articulation work’ for complex multi-stakeholder operations: facilitation, coordination, documentation, etc.

This includes, for example, specifically budgeting and delegating separate roles for both project management and product management. Product managers should have some form of agency in technical partners’ product roadmaps, and project managers should have oversight of workplans.

Such investments can ensure that the insights of all stakeholders are effectively leveraged to guide the course of the project, and that results are effectively packaged to maximize replicability.


Appendix A: Technical Deliverables

Open source directory information self-publishing form for service providers

Version 1:

Site: www.legalservices.site 

Example User: greg

Password: b3n3t3ch

Explainer Video: https://www.youtube.com/watch?v=7qwJxx8451I

Source Code: www.github.com/benetech/LegalServicesFlorida

Second ‘Alpha’ test:

Survey Gizmo Prototype Questionnaire: http://www.surveygizmo.com/s3/3957534/Florida-Legal

Version 2:

Site: https://johnhbenetech.pythonanywhere.com/

Example User: greg

Password: b3n3t3ch

Source Code: https://github.com/johnhbenetech/FloridaLegal2.0

Aggregated data about all FBF- and/or LSC-funded legal aid providers in Florida

Legal resource directory data has been aggregated and extracted from LegalServer’s Referral Hub — a ZIP file is available here.

Notes: 

LegalServer Referral Hub

Documentation of LegalServer Referral Hub form and features. (Update from LegalServer pending...)


Appendix B: Key links

Project charter

Operational Tables

Shared Drive Folder

Evaluation data

Sample Directory Data Set

Feedback from Benetech on Open Referral format

Documentation of LegalServer Referral Hub form and features