This repository contains a curated directory of open source software (OSS) projects and their associated artifacts. Artifacts include git repositories, npm packages, smart contracts, Open Collective collectives, accounts used for managing grant funds, and more. Groups of related projects are organized into collections.
The OSS Directory serves as the "source of truth" for the projects and collections that are discoverable on OSO.
While the directory may never be complete, it is actively maintained. We welcome community contributions of new projects and collections, as well as updates to existing entries.
This directory is a public good, free to use and distribute. We hope it serves the needs of researchers, developers, foundations, and other users looking to better understand the OSS ecosystem!
Contributions are made via pull request. Fork this repository, make changes to any .yaml file under ./data/, and open a pull request.
If you're using Claude Code, skills for common tasks are available in .claude/skills/:
| Task | Skill |
|---|---|
| Add a new project | /ossd-add-project |
| Add a new collection | /ossd-add-collection |
| Update an existing project or collection | /ossd-update-project |
| Bulk import from an external data source | /ossd-bulk-update |
| Review a community PR | /ossd-review-pr |
All submissions are validated against the schema. If you have questions, open an issue or join us on Discord.
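For reference, a new project file is just a small YAML document. The sketch below is illustrative only (the field names and version number shown are assumptions and may differ); the authoritative schema lives in `./src/resources/schema`:

```yaml
# Illustrative sketch only -- see ./src/resources/schema for the real schema
version: 7
name: uniswap
display_name: Uniswap
github:
  - url: https://github.com/uniswap
```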
You can install dependencies with pnpm:

```
pnpm install
```

Linting is the easiest way to catch formatting errors in YAML:

```
pnpm lint
```

If the formatting errors are easy for Prettier to handle (e.g. indentation), you can use it to rewrite the files for you automatically:

```
pnpm prettier:write
```

Our GitHub Actions CI will reject any contribution that does not conform to the schema defined in `./src/resources/schema`. To check for validation errors locally, run:
```
pnpm run validate
```

First bump the version number in `package.json`. Then build and publish:

```
pnpm build
npm publish
```

If you have not yet logged into npm, you'll first need to run `npm login`.
First bump the version number in `pyproject.toml`. Then build and publish:

```
poetry build
poetry publish
```

If you have not yet logged into PyPI, you'll first need to generate an API token and configure the CLI: `poetry config pypi-token.pypi API_TOKEN`.
We have also published this repository as a library that you can use in your own projects. This is useful if you want to build a tool that uses the data in this repository or perform your own custom analysis.
We have libraries for JavaScript and Python. Neither package bundles the dataset itself. Under the hood, fetching data will clone this repository into a temporary directory, read all the data files, validate them against the schema, and return the resulting objects. This way, you know you're getting the latest data, even if the package hasn't been updated in a while.

Note: these libraries do not work in a browser environment.
Install the library:

```
npm install oss-directory
# OR yarn add oss-directory
# OR pnpm add oss-directory
```

You can fetch all of the latest data in this repo with the following:

```typescript
import { Project, Collection, fetchData } from "oss-directory";

const data = await fetchData();
const projects: Project[] = data.projects;
const collections: Collection[] = data.collections;
```

You can pass the following options to `fetchData`:
```typescript
const data = await fetchData({
  // The branch to check out from the repo
  branch: "main",
  // The commit to check out from the repo
  commit: "066e5ad612d6ef67c0516b55b0c3be789282e6b6",
  // Do not strictly validate the data coming from GitHub
  skipValidation: true,
});
```

Note: `skipValidation` is useful if you don't want this integration to raise an error every time we update the schema. However, as we add new fields, skipping validation can return data that does not conform to the types your application expects.
We also include functions for casting and validating data:

- `validateProject`
- `validateCollection`
- `safeCastProject`
- `safeCastCollection`
Install the library:

```
pip install oss-directory
# OR poetry add oss-directory
```

You can fetch all of the data in this repo with the following:

```python
from typing import List

from ossdirectory import fetch_data
from ossdirectory.fetch import OSSDirectory

data: OSSDirectory = fetch_data()
projects: List[dict] = data.projects
collections: List[dict] = data.collections
```

The directory is organized into two main folders:
- `./data/projects` - each file represents a single open source project and contains all of the artifacts for that project.
  - See `./src/resources/schema/project.json` for the expected JSON schema.
  - Files are organized in subdirectories by the first character of the project name: e.g. `data/projects/u/uniswap.yaml`.
  - Project names must be globally unique and match the filename exactly.
  - Naming patterns (in order of preference):
    - GitHub org name: `uniswap` for `github.com/uniswap`
    - Single repo: `[repo]-[owner]` for `github.com/owner/repo`
    - Multi-repo custom: `[project]-[distinguisher]`
  - Names must be lowercase and hyphen-separated.
- `./data/collections` - each file represents a collection of projects that have some collective meaning (e.g. all projects in an ecosystem).
  - See `./src/resources/schema/collection.json` for the expected JSON schema.
  - Projects are identified by their unique project `name`.
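Taken together, the layout and naming rules above determine where a given project file lives. As a sketch, the hypothetical helper below (not part of this repo; the name check only approximates the schema's actual pattern) maps a project name to its expected path:

```typescript
// Hypothetical helper: compute the expected data file path for a project name.
function projectFilePath(name: string): string {
  // Approximation of the naming rule: lowercase, hyphen-separated segments.
  if (!/^[a-z0-9]+(-[a-z0-9]+)*$/.test(name)) {
    throw new Error(`invalid project name: ${name}`);
  }
  // Files are bucketed by the first character of the project name.
  return `data/projects/${name[0]}/${name}.yaml`;
}

console.log(projectFilePath("uniswap")); // data/projects/u/uniswap.yaml
```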
Sometimes you need to make a bunch of changes all at once. We have a framework that supports 2 types of such changes:
- Migrations: If you are changing the schema of the data files, you'll need to write a migration that changes all data files to adhere to the new schema
- Transformations: If you want to write a one-off transformation that does not change the schema, use this
🚨 All files under `./data` must conform to the schemas defined in `./src/resources/schema`.
If you want to change the schema, you'll need to write a migration:

1. Update the schema in `src/resources/schema/`.
2. Add a `[version]-[desc].ts` file to the `src/migrations/` folder, exporting functions that migrate each file type.
3. Add the migration functions to the `MIGRATIONS` array in `src/migrations/index.ts`.
4. Run the migration with `pnpm migrate`.
5. Run `pnpm validate` to make sure the migrated data adheres to the new schema. We will not accept any PRs where the data does not conform to the schemas.
6. Commit and submit a pull request with all of the resulting changes.
7. Publish a new version of the npm package. Remember to bump the version number in `package.json`; if you don't, you'll break all downstream dependents, because they fetch the latest data from GitHub.
8. Notify all downstream dependents of this package that there is a new major version and that they need to update. Schema changes will break any dependent builds until they upgrade.
The framework runs migrations in sequence, so you are guaranteed that your data is valid as of the previous version. Note: we currently only support migrating in one direction (there is no way to revert a migration).
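To make the shape of a migration concrete, here is a minimal hypothetical example; the types and field names are invented for illustration, and real migrations must follow the signatures defined in `src/migrations/`:

```typescript
// Hypothetical migration: a schema change that renames `slug` to `name`
// and bumps the file's version number.
type ProjectV6 = { version: number; slug: string };
type ProjectV7 = { version: number; name: string };

function migrateProject(old: ProjectV6): ProjectV7 {
  return { version: old.version + 1, name: old.slug };
}

console.log(migrateProject({ version: 6, slug: "uniswap" })); // { version: 7, name: "uniswap" }
```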
If you need to make a wide-ranging change that does not affect the schema, use these steps to script the change:

1. Add a `[transformName].ts` file to the `src/transformations/` folder, exporting functions that transform each file type.
2. Add the transformation functions to the `TRANSFORMATIONS` array in `src/transformations/index.ts`.
3. Run the transformation with `pnpm transform --name <transformName>`.
4. Run `pnpm validate` to make sure the transformed data adheres to the schema. We will not accept any PRs where the data does not conform to the schemas.
5. Commit and submit a pull request with all of the resulting changes.
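A transformation has the same flavor, minus the schema change. For example, a hypothetical one-off that lowercases GitHub URLs might look like this (the `Project` type here is simplified for illustration):

```typescript
// Hypothetical transformation: lowercase every GitHub URL in a project.
type Project = { name: string; github?: { url: string }[] };

function lowercaseGithubUrls(project: Project): Project {
  return {
    ...project,
    github: project.github?.map((g) => ({ ...g, url: g.url.toLowerCase() })),
  };
}

const out = lowercaseGithubUrls({
  name: "uniswap",
  github: [{ url: "https://GitHub.com/Uniswap" }],
});
console.log(out.github?.[0].url); // https://github.com/uniswap
```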