---
language:
- en
pretty_name: Housing QA
task_categories:
- question-answering
tags:
- law
- legal
- retrieval
- statutes
- housing
- QA
size_categories:
- 1K<n<10K
license: cc-by-sa-4.0
---
`HousingQA` is a benchmark for evaluating LLMs' ability to answer questions about housing law in different US states. See [the accompanying paper](https://reglab.github.io/legal-rag-benchmarks/) for details on its construction.
`HousingQA` supports the following research questions:
- **Knowledge**: Whether LLMs can answer questions about state housing law in the year 2021 using only the knowledge encoded in their weights.
- **Comprehension**: Whether LLMs can answer questions about state housing law, given an excerpt of the relevant statutes.
- **RAG**: Whether retrieval systems can identify the relevant housing statutes for a question about a state, and an LLM provided with these statutes can answer the question correctly.
`HousingQA` consists of three subsets: `questions`, `questions_aux`, and `statutes`. `questions_aux` is a larger set of questions (a superset of `questions`) but does **not** contain statute annotations linked to the statute corpus, so it cannot be used for evaluating RAG. `statutes` contains a database of statutes that can be used for evaluating full RAG pipelines.
`HousingQA` was constructed by collecting questions from the [Legal Services Corporation Eviction Laws Database](https://www.lsc.gov/initiatives/effect-state-local-laws-evictions/lsc-eviction-laws-database), and scraping statutes from [Justia](https://law.justia.com/codes/).
**Please note that because all questions and statutes are only accurate as of 2021, nothing in this dataset should be construed as legal advice or used to provide legal advice**.
## Loading data
```python
from datasets import load_dataset
# Load questions
questions = load_dataset("reglab/housing_qa", "questions", split="test")
# Load questions_aux
questions_aux = load_dataset("reglab/housing_qa", "questions_aux", split="test")
# Load statutes
statutes = load_dataset("reglab/housing_qa", "statutes", split="corpus")
```
## questions
`questions` consists of 6853 yes/no questions. The original LSC Database answered a common set of questions about housing law for each US state/territory ("jurisdiction"), according to the law in 2021. Each sample in this dataset corresponds to a different (question, jurisdiction) pair and has the following structure:
```json
{
  "idx": 0,
  "state": "Alabama",
  "question": "Is there a state/territory law regulating residential evictions?",
  "answer": "Yes",
  "question_group": 69,
  "statutes": [
    {
      "statute_idx": 431263,
      "citation": "ALA. CODE § 35-9A-141(11)",
      "excerpt": "(11) “premises” means a dwelling unit and the structure of which it is a part and facilities and appurtenances therein and grounds, areas, and facilities held out for the use of tenants generally or whose use is promised by the rental agreement to the tenant;"
    }
  ],
  "original_question": "Is there a state/territory law regulating residential evictions?",
  "caveats": [
    ""
  ]
}
```
where:
- `idx`: a unique sample index
- `state`: the state to which the question pertains
- `question`: the question text
- `answer`: the answer to the question ("Yes" or "No")
- `question_group`: the question group; the same question asked across jurisdictions shares a `question_group` value
- `statutes`: the statutes that support the answer (see the join sketch below)
  - `statute_idx`: a foreign key corresponding to the `idx` column in the `statutes` subset
  - `citation`: the statute citation
  - `excerpt`: an excerpt from the statute
- `original_question`: the original question from the LSC database. This is sometimes identical to `question`; in other cases, `question` is a rephrased version designed to have a yes/no answer.
- `caveats`: any caveats to the answer, from the LSC annotations
Please see the paper for a full description of how scraping and question formatting occurred. We note several significant limitations below:
- Not all states are represented, and certain states are missing for certain questions.
- Not all statutes included in the `statutes` field are necessary for answering the question.
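As an illustrative sketch (not part of any official tooling for this dataset), the snippet below shows one way to resolve the `statute_idx` foreign keys in `questions` against the `statutes` corpus to recover each supporting statute's full text:
```python
from datasets import load_dataset

questions = load_dataset("reglab/housing_qa", "questions", split="test")
statutes = load_dataset("reglab/housing_qa", "statutes", split="corpus")

# Build a statute-idx -> row-position map once, so later lookups are O(1).
idx_to_row = {idx: pos for pos, idx in enumerate(statutes["idx"])}

sample = questions[0]
for annotation in sample["statutes"]:
    statute = statutes[idx_to_row[annotation["statute_idx"]]]
    print(statute["citation"])
    print(statute["text"][:200])  # full statute text, truncated for display
```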
## questions_aux
`questions_aux` consists of 9297 yes/no questions. Each question and answer is specific to a jurisdiction. Each row has the following structure (note that, unlike `questions`, the `statutes` entries carry no `statute_idx` link into the statute corpus):
```json
{
  "idx": 0,
  "state": "Alabama",
  "question": "Is there a state/territory law regulating residential evictions?",
  "answer": "Yes",
  "question_group": 69,
  "statutes": [
    {
      "citation": "ALA. CODE § 35-9A-141(11)",
      "excerpt": "(11) “premises” means a dwelling unit and the structure of which it is a part and facilities and appurtenances therein and grounds, areas, and facilities held out for the use of tenants generally or whose use is promised by the rental agreement to the tenant;"
    }
  ],
  "original_question": "Is there a state/territory law regulating residential evictions?",
  "caveats": [
    ""
  ]
}
```
where:
- `idx`: a unique sample index
- `state`: the state to which the question pertains
- `question`: the question text
- `answer`: the answer to the question ("Yes" or "No")
- `question_group`: the question group; questions repeated across jurisdictions share the same `question_group` value
- `statutes`: a list of the statutes that support the answer
  - `citation`: the statute citation
  - `excerpt`: an excerpt from the statute
- `original_question`: the original question from the LSC database. This is sometimes identical to `question`; in other cases, `question` is a rephrased version designed to have a yes/no answer.
- `caveats`: any caveats to the answer, from the LSC annotations
If you want to prompt an LLM to answer a question, we recommend explicitly providing the state and specifying that the question should be answered with respect to the law in 2021:
```text
Consider statutory law for {state} in the year 2021. {question}
Answer "Yes" or "No".
Answer:
```
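A minimal sketch of filling this template from a `questions_aux` row; `ask_model` is a hypothetical stand-in for whichever LLM client you use:
```python
from datasets import load_dataset

questions_aux = load_dataset("reglab/housing_qa", "questions_aux", split="test")

PROMPT_TEMPLATE = (
    "Consider statutory law for {state} in the year 2021. {question}\n"
    'Answer "Yes" or "No".\n'
    "Answer:"
)

def build_prompt(row: dict) -> str:
    """Format the knowledge-only prompt for a single row."""
    return PROMPT_TEMPLATE.format(state=row["state"], question=row["question"])

row = questions_aux[0]
prompt = build_prompt(row)
# prediction = ask_model(prompt)  # hypothetical LLM call
# correct = prediction.strip().lower().startswith(row["answer"].lower())
```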
## statutes
The `statutes` subset contains approximately 1.7 million statutes collected from Justia. Note:
- Not all states are represented
- For each state, not all statutes are captured
Data has the following columns:
- `citation`: statute citation
- `path`: a string containing the headers for the statute
- `state`: the state for which the statute belongs
- `text`: the text of the statute
- `idx`: a unique index for the statute. This maps to the `statute_idx` field in the `questions` subset.
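As a rough illustration of the RAG setting (not an official baseline), the sketch below retrieves candidate statutes for a question with BM25, restricting the corpus to the question's state first. It assumes the third-party `rank-bm25` package; any retriever could be substituted:
```python
from datasets import load_dataset
from rank_bm25 import BM25Okapi  # pip install rank-bm25

questions = load_dataset("reglab/housing_qa", "questions", split="test")
statutes = load_dataset("reglab/housing_qa", "statutes", split="corpus")

sample = questions[0]

# Restrict the corpus to the question's state before indexing.
state_statutes = statutes.filter(lambda row: row["state"] == sample["state"])
corpus = state_statutes["text"]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Retrieve the top-5 candidate statutes for the question.
top_docs = bm25.get_top_n(sample["question"].lower().split(), corpus, n=5)
```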
## Prompting
When providing statutes in context (the comprehension and RAG settings), we likewise recommend explicitly providing the state and specifying that the question should be answered with respect to the law in 2021:
```text
Consider statutory law for {state} in the year 2021. Read the following statute excerpts which govern housing law in this state, and answer the question below.
Statutes ##################
{statute_list}
Question ##################
{question}
Answer "Yes" or "No".
Answer:
```
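A minimal sketch of filling this template, here using a row's own `statutes` annotations (the comprehension setting); a full RAG pipeline would substitute retrieved statutes instead:
```python
from datasets import load_dataset

questions = load_dataset("reglab/housing_qa", "questions", split="test")

RAG_TEMPLATE = (
    "Consider statutory law for {state} in the year 2021. Read the following "
    "statute excerpts which govern housing law in this state, and answer the "
    "question below.\n"
    "Statutes ##################\n"
    "{statute_list}\n"
    "Question ##################\n"
    "{question}\n"
    'Answer "Yes" or "No".\n'
    "Answer:"
)

def build_rag_prompt(row: dict) -> str:
    """Fill the template with the row's annotated statute excerpts."""
    statute_list = "\n\n".join(
        f'{s["citation"]}: {s["excerpt"]}' for s in row["statutes"]
    )
    return RAG_TEMPLATE.format(
        state=row["state"], statute_list=statute_list, question=row["question"]
    )

print(build_rag_prompt(questions[0]))
```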