Authors:
Nischal Subedi
The affordable housing crisis in the U.S. is more than an economic problem: It is a crisis of information. Every year, more than a million eviction filings are initiated, a figure that represents not just legal actions but also profound moments of human distress [1]. Behind every filing is often a person navigating a confusing patchwork of state and local laws, frequently without legal aid. The Legal Services Corporation reports a vast "justice gap," where the civil legal needs of a majority of low-income Americans go unmet, leaving them to face complex challenges alone [2]. This information asymmetry, where one party holds knowledge as a form of power, creates a landscape ripe for misunderstanding, procedural errors, and inequitable outcomes. Can technology, specifically artificial intelligence, begin to level this playing field?
→ For high-stakes social issues, an AI's architecture must prioritize factual safety over conversational flair to prevent harmful misinformation.
→ Confronting and mitigating the inherent bias in readily available data sources is a fundamental design task, not just a technical one.
→ Designing an AI to transparently communicate its own limitations is a critical feature for building user trust and ensuring responsible use.
This question led me to develop the Landlord-Tenant Rights Bot, an AI-powered tool designed to provide both tenants and small landlords with accessible, state-specific guidance on housing laws. This article chronicles the journey of its creation. It is not a report on a novel algorithm—I used a well-established AI architecture to focus on the application's challenges [3]—but a transparent blueprint of the crucial and often difficult design choices I made at every step. It is a case study for practitioners on what it takes to move an AI tool from concept to a responsible, real-world implementation that addresses a complex human need. This work is not simply about building an application; it is about grappling with the ethical responsibilities that arise when our designs intervene in people's lives at their most vulnerable moments. My ultimate goal is for this AI tool to empower tenants and landlords with accessible legal knowledge, fostering the confidence needed to navigate housing challenges and promote community stability.
The Challenge: An Asymmetric Information Landscape
The landscape of housing law is notoriously fragmented. Rules governing evictions, security deposits, and repairs vary not just from state to state, but often from city to city. For a tenant under the immense stress of a potential eviction, trying to find a clear, applicable answer online can be an exercise in frustration. Static websites and informational pamphlets, while well-intentioned, often fail to address a user's specific, situational query. They present a wall of text that the user must parse on their own. This model of information delivery is profoundly mismatched with the user's state of mind. Research on cognitive scarcity suggests that people facing intense stress have diminished mental bandwidth, making it harder to process complex information and make long-term plans. A dense tenants' rights handbook is of little use to someone who needs an immediate answer to a single, urgent question.
This challenge is not limited to tenants. Small landlords, who often operate without legal counsel, can easily make unintentional procedural errors. Returning a security deposit late or failing to provide proper written notice for entry can escalate a preventable disagreement into a costly and emotionally draining legal dispute. They, too, lack a simple, direct way to get answers for their specific situations without retaining expensive legal help for every minor question.
Existing legal technology tends to prioritize tools for contract analysis, e-discovery, and case management, serving the legal professionals and large firms who can afford them [4]. This profoundly underserves the most vulnerable parties in the housing ecosystem: individual tenants and small-scale landlords. The central challenge, therefore, was to design a tool that could cut through the complexity and provide interactive, conversational guidance. The promise of conversational AI lies in its mode of interaction: It allows a user to ask a question in their own words and receive a synthesized, direct response—an exchange far better suited to a moment of crisis than a search engine or a static document.
To overcome this challenge, I sought to combine the power of large language models with a structured, safety-first approach. Building upon decades of prior research in HCI on bias and trust [5], I established clear design criteria to guide the project. These were not technical specifications but ethical premises.
Prioritize factual safety over conversational fluency. In the context of legal rights, a confidently wrong answer is more dangerous than no answer at all. The system's primary goal must be to avoid hallucination, the phenomenon where LLMs invent plausible but incorrect information. A hallucinated response could lead a user to miss a critical deadline or give up a legal right they possess. To prevent this, the architecture had to be fundamentally constrained, even if it meant sacrificing some of the fluid, creative conversational abilities of a pure language model. Reliability had to be the cornerstone of the design.
Acknowledge and mitigate inherent data bias. Perfect, unbiased datasets rarely exist in the real world. This is especially true in the legal domain, where information is often created for a specific audience. I knew that the data sources I would use to build the bot's knowledge base would have an inherent perspective. The design process, therefore, had to include a transparent assessment of these sources, identifying their potential biases and building features to counteract them. The tool should not present itself as a perfectly neutral oracle, but rather as a practical interpreter of available information, being honest about the lens through which that information is presented.
Design for trustworthy uncertainty. A responsible tool must know what it doesn't know. The danger of many AI systems is their tendency to "overclaim" knowledge, providing a response to any query, regardless of its relevance or the system's actual data. I decided that my bot needed to be designed to fail gracefully. When faced with an ambiguous or out-of-scope query, the system should clearly communicate its limitations to the user and, where possible, guide them toward appropriate human experts. This is not a failure of the system, but a critical feature that builds user trust and ensures responsible use.
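As a concrete illustration of this principle, the sketch below shows one simple guardrail for failing gracefully, anticipating the retrieval-based architecture described in the next section: If nothing in the knowledge base is sufficiently similar to the user's question, the bot declines to answer and refers the user to human help. The threshold value, helper names, and referral wording are illustrative assumptions, not the bot's actual implementation.

```python
# A minimal sketch of "failing gracefully" (illustrative, not the bot's actual code):
# if no retrieved passage is similar enough to the user's question, decline to
# answer and point the user toward human experts instead of guessing.

SIMILARITY_THRESHOLD = 0.75  # assumed cutoff; below this, treat the query as out of scope

REFERRAL_MESSAGE = (
    "I don't have reliable information for that question. "
    "Please contact your local legal aid office or a tenant resource center."
)

def answer_or_refer(query: str, retrieve, generate) -> str:
    """Return a grounded answer, or a referral when the query is out of scope."""
    chunks = retrieve(query)  # list of (text, similarity_score) pairs
    if not chunks or max(score for _, score in chunks) < SIMILARITY_THRESHOLD:
        return REFERRAL_MESSAGE
    context = "\n\n".join(text for text, _ in chunks)
    return generate(query, context)
```

Refusing to answer in this way trades coverage for reliability, which is exactly the trade-off the design criteria call for.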
Ensure accessibility and simplicity. To serve those most affected by the housing crisis, the tool must be accessible on the devices they use most. Data from the Pew Research Center confirms that a significant majority of low-income Americans rely on smartphones for Internet access, and many are "smartphone dependent" [6]. This meant a simple, text-based, mobile-first interface was not just a design choice but also a requirement for digital equity. The experience had to be direct, easy, and immediately usable without a steep learning curve.
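In practice, that requirement translates into a lightweight, text-first chat front end. The sketch below shows roughly what such an interface could look like using Gradio, a common toolkit for chat apps hosted on Hugging Face; the toolkit choice, function names, and placeholder answer are assumptions, since the article does not specify the UI library.

```python
# Illustrative mobile-friendly chat front end (Gradio is an assumed choice).
import gradio as gr

DISCLAIMER = "Disclaimer: This is for informational purposes only and is not legal advice."

def answer_question(query: str) -> str:
    """Placeholder for the retrieval-augmented pipeline described in the next section."""
    return "Example answer grounded in the retrieved state-law summaries."

def respond(message, history):
    # Every reply carries the legal disclaimer, regardless of the question.
    return f"{answer_question(message)}\n\n{DISCLAIMER}"

demo = gr.ChatInterface(
    fn=respond,
    title="Landlord-Tenant Rights Bot",
    description="Ask a question about your state's rental laws.",
)

if __name__ == "__main__":
    demo.launch()
```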
Figure 1 depicts the architecture of the Landlord-Tenant Rights Bot, which is built on three integrated components hosted on the Hugging Face platform: a user interface, a large language model, and a retrieval-augmented generation pipeline connected to a curated vector database. The core of the system is the retrieval-augmented generation (RAG) architecture. This multistep process is the key to ensuring factual safety. When a user asks a question, the system first converts that query into a numerical representation, or embedding, which captures its semantic meaning. This embedding is then used to perform a similarity search across a custom vector database, which contains thousands of chunked legal summaries from all 50 U.S. states. The most semantically relevant text chunks are retrieved and dynamically inserted into a new, detailed prompt that is sent to the language model (OpenAI's GPT-3.5 Turbo). The model is then explicitly instructed to synthesize a clear, human-readable answer based primarily on the provided text. This entire process can be visualized as a focused user journey, as illustrated in Figure 2.
Figure 1. The RAG architecture ensures the chatbot's responses are grounded in a curated database of legal summaries, acting as a critical safety feature.
Figure 2. A focused user journey. The diagram shows how the bot deconstructs a user's real-world question.
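For readers who want to trace this flow in code, here is a minimal sketch of the pipeline just described. GPT-3.5 Turbo is the model named above; the embedding model, the vector_db interface, and the prompt wording are illustrative assumptions rather than the bot's actual components.

```python
# Sketch of the retrieval-augmented generation loop (assumptions noted inline).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_question(query: str, state: str, vector_db) -> str:
    # 1. Convert the user's question into an embedding that captures its meaning.
    embedding = client.embeddings.create(
        model="text-embedding-3-small",  # assumed embedding model
        input=query,
    ).data[0].embedding

    # 2. Retrieve the most semantically similar legal-summary chunks for the state.
    #    vector_db.search is a stand-in for whatever vector database is used.
    chunks = vector_db.search(embedding, top_k=5, filter={"state": state})

    # 3. Build a prompt that constrains the model to the retrieved text.
    context = "\n\n".join(chunk["text"] for chunk in chunks)
    system_prompt = (
        "You explain U.S. landlord-tenant law in plain language. "
        "Answer using only the provided legal summaries. If they do not cover "
        "the question, say so. End with a note that this is not legal advice."
    )
    user_prompt = f"Legal summaries:\n{context}\n\nQuestion: {query}"

    # 4. Ask GPT-3.5 Turbo to synthesize a grounded, human-readable answer.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # favor deterministic, conservative answers
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

The system prompt is where the safety constraint lives: The model is told to stay within the retrieved summaries and to flag gaps rather than improvise.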
The vector database itself was populated by processing information scraped from reputable online resources that summarize state-level laws, such as Innago and the American Apartment Owners Association. The data curation was a significant undertaking. I implemented a verification process that prioritized content directly referencing state statutes (e.g., per Cal. Civil Code § 1941.1). This process helped anchor the summaries to primary legal sources. As noted in the design considerations, however, I recognized that these sources are primarily created for property owners. This meant the tone and framing of the content would likely reflect that perspective, a hypothesis my evaluation later confirmed.
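The prioritization described here can be approximated with a simple heuristic: Chunks that contain an explicit statute citation rank ahead of those that do not. The regular expression and chunk sizes below are illustrative assumptions, not the exact rules used in the curation pipeline.

```python
import re

# Heuristic pattern for statute references such as "Cal. Civil Code § 1941.1"
# or "Col. Rev. Stat. § 38-12-103" (illustrative, not the curation pipeline's exact rule).
STATUTE_PATTERN = re.compile(r"§\s*\d[\d.\-()]*")

def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split a scraped legal summary into overlapping chunks for embedding."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def prioritize_statute_backed(chunks: list[str]) -> list[str]:
    """Order chunks so that statute-citing chunks come before uncited ones."""
    cited = [c for c in chunks if STATUTE_PATTERN.search(c)]
    uncited = [c for c in chunks if not STATUTE_PATTERN.search(c)]
    return cited + uncited
```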
This rigorous, multistep process ensures that for a query such as the one below, the chatbot's response is rooted in specific, verifiable information, providing actionable details while also including a necessary legal disclaimer.
User query (tenant): My landlord in Colorado says they can keep my whole security deposit because the carpet is dirty. Can they do that?
Chatbot response: In Colorado, landlords can generally deduct from a security deposit for damages beyond normal wear and tear. Routine cleaning, like basic carpet dirt from regular use, is typically considered "normal wear and tear" and cannot be deducted. However, significant damage like large stains, burns, or tears might be deductible. Colorado law (Col. Rev. Stat. § 38-12-103) requires your landlord to provide you with a written statement listing the specific reasons for any deductions within one month (or up to 60 days if specified in the lease) after you move out. If they don't provide this statement or wrongly withhold funds, you may be able to recover up to three times the amount wrongfully withheld, plus attorney fees. Disclaimer: This is for informational purposes only and is not legal advice.
To understand the real-world utility and reception of the bot, I conducted a formative evaluation with a cohort of 20 participants. This group was intentionally diverse, including 16 tenants currently facing various housing issues and four legal advocates who work directly with tenants. This was not a large-scale statistical validation, but an exploratory study designed to observe authentic interactions and gather rich qualitative feedback, as shown in Table 1. The insights were profound and fell into three key themes.
Table 1. Preliminary results from a pilot study showed high efficacy for factual queries but also highlighted challenges related to user trust and nuance.
First, the tool provided immediate and tangible value as "informational first aid." Participants universally appreciated getting an instant response 24-7. In a moment of crisis, the act of getting any clear information can be empowering and can significantly reduce anxiety. It provides a starting point for action and a sense of agency where there was previously only confusion. As one participant who was dealing with an urgent repair issue put it: "I can't get ahold of anyone at legal aid until next week. Just getting this… It's something. It gives me words for what's happening." This highlights the power of immediacy. The bot successfully filled a critical gap between the moment a problem arose and the moment a user could access human help.
Second, the evaluation surfaced the deep challenges of trust and tonal bias. While the bot was perceived as factually accurate, the experienced users, particularly the legal advocates, noted a subtle but perceptible "landlord-leaning" tone. They felt the bot stated the law correctly, but without conveying the protective spirit behind tenant rights. It presented the law as a set of neutral rules, failing to acknowledge the power dynamics it is meant to regulate. One advocate provided a particularly sharp critique that captures this nuance perfectly: "This is technically correct, but it's not how I would explain it to a tenant. It's missing the 'why'—that this law exists to protect you. It feels very sterile." This feedback powerfully confirmed that the source and framing of our data deeply influence the user's perception of trustworthiness. An "ally" needs to not only provide correct information but also speak in a voice that feels supportive and that's aligned with the user's perspective.
Finally, the evaluation revealed a fundamental disconnect between the chatbot's function as an informational snapshot and the user's need to navigate a sequence of future events. The system is designed to answer, "What is the law about X?" But a person in a precarious situation is implicitly asking, "If I use law X, what will happen to me next?" This critical gap between information and strategy was perfectly illustrated by one user's thought process as she used the bot: "Great, so I can send my landlord a formal letter about the leaky roof. But what happens when he gets it and just turns around and tries to evict me for complaining? That's what I'm actually worried about."
This comment is far more insightful than a simple request for advice. The user isn't asking two separate questions: They are trying to map out a chain of cause and effect. They are attempting to storyboard the consequences of their actions. Our chatbot, in its current form, can provide a static snapshot of the law on repairs. It can even provide a separate snapshot on retaliatory eviction. But it cannot show how asserting the first right might trigger the violation of the second. It cannot help the user navigate the strategic and emotional calculus of their situation. This highlights a crucial frontier for design in this space: moving beyond providing static legal facts and toward creating tools that help users visualize pathways, understand potential retaliations, and build a safe strategy for action.
My exploration with the Landlord-Tenant Rights Bot underscores the importance of humility in design. While my attempt to build an accessible tool was successful on a technical level, the true work lies in navigating the complex sociotechnical challenges that arise when technology meets human vulnerability. The endeavor to address the shortcomings of the information gap serves as a stepping-stone. It inspires me, and I hope the broader HCI community, to recognize the delicate balance between technical innovation and the substance of human experience. It is a reminder that, beyond innovative design elements, content strength, tonal alignment, and an understanding of narrative context remain the cornerstones of effective design. These are elements that resonate most deeply and demand continuous refinement and thoughtful adaptation in future ventures. Building tools to address deep social problems such as the housing crisis is not simply a matter of writing better code; it is a matter of exercising better judgment and centering the lived reality of the people we aim to serve.
1. Princeton University Eviction Lab. Eviction tracking system; http://bit.ly/4nHjsjt
2. Legal Services Corporation. The justice gap: Measuring the unmet civil legal needs of low-income Americans. June 2017; http://bit.ly/3JXmtNZ
3. Lewis, P. et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Proc. of the 34th International Conference on Neural Information Processing Systems. Curran Associates, 2020, Article 793, 9459–9474.
4. American Bar Association. ABA Task Force releases report on AI's opportunities, challenges for the legal profession. Aug. 5, 2024; http://bit.ly/4m8br5K
5. Friedman, B. and Nissenbaum, H. Bias in computer systems. ACM Transactions on Information Systems 14, 3 (1996), 330–347.
6. Pew Research Center. Mobile fact sheet. Nov. 13, 2024; http://bit.ly/469WQko
Nischal Subedi is a Ph.D. candidate in statistics and data science at the University of Delaware. His research focuses on the application of artificial intelligence to address high-stakes social challenges, with a specific interest in housing equity, access to justice, and the ethical design of sociotechnical systems. [email protected]
This work is licensed under Creative Commons Attribution International 4.0.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2025 ACM, Inc.