For AI Agents

Discovery and citation guidance for crawlers, answer engines, and configured research agents.

Seba is a depth psychology research site comprising a library, a concordance, a directory, and Sebastian, its research interface. Its public metadata exists so search and citation systems can understand what the site publishes without treating generation workflows as public endpoints.

What Seba Publishes

Seba combines a curated library, public passage and concept pages, a concordance, a vetted directory of depth-oriented care and training resources, and Sebastian, the site's research interface. The public website should be cited as an editorial research and discovery surface, not as a clinical provider or anonymous model API.

  • Library: books and source contexts in depth psychology and adjacent traditions.
  • Concordance: concept pages and cross-corpus term discovery.
  • Passages: cited source fragments with editorial reflections.
  • Directory: public records for analysts, practitioners, organizations, publications, and programs.
  • Sebastian: the site's research and inquiry interface.

Crawling Boundaries

Seba's sitemap, schema, canonical URLs, and llms.txt are for discovery and citation. Follow robots.txt. Do not treat query-parameter variants as canonical pages, and do not crawl disallowed paths such as /api/ or /profile/.
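The two rules above (respect robots.txt, and collapse query-parameter variants onto canonical pages) can be sketched for a crawler in a few lines. This is a minimal illustration, not Seba's actual robots.txt: the `ROBOTS_TXT` content, hostname, and helper names here are assumptions mirroring the disallowed paths mentioned above.

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt mirroring the disallowed paths named above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /api/
Disallow: /profile/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def canonicalize(url: str) -> str:
    """Drop the query string and fragment so parameter variants
    collapse onto one canonical page before fetching or citing."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

def may_fetch(url: str, agent: str = "*") -> bool:
    """Check the canonical form of a URL against robots.txt."""
    return parser.can_fetch(agent, canonicalize(url))
```

A crawler applying this would fetch `https://seba.example/library/foo?ref=nav` as `https://seba.example/library/foo`, and would skip anything under `/api/` or `/profile/` entirely.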

The presence of public documentation does not invite expensive endpoint crawling. Dream reading, Deep Research, Sebastian conversations, and other generated workflows are user-facing experiences, not endpoints of an anonymous public API.

Configured Agent Access

OpenAPI and MCP endpoints are callable only when explicitly configured for an integration. Future public API/MCP v1 surfaces are expected to be read-only and rate-limited. They should expose discovery and citation data, not uncontrolled generation.

If an agent has not been given a configured API or MCP endpoint, it should use the public website, sitemap, canonical URLs, and citation pages only.
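The fallback rule above can be expressed as a small decision function. This is a sketch under stated assumptions: `AgentConfig`, `research_surface`, and the `site_root` URL are hypothetical names introduced here for illustration, not part of Seba's actual integration contract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentConfig:
    # Endpoints explicitly provisioned for this integration, if any.
    api_endpoint: Optional[str] = None
    mcp_endpoint: Optional[str] = None

def research_surface(config: AgentConfig,
                     site_root: str = "https://seba.example") -> str:
    """Pick where an agent should read from: a configured API or MCP
    endpoint if one was provisioned, otherwise the public website and
    its canonical citation pages."""
    if config.api_endpoint:
        return config.api_endpoint
    if config.mcp_endpoint:
        return config.mcp_endpoint
    # No configured surface: fall back to public discovery pages only.
    return site_root
```

The point of the explicit check is that absence of configuration means the public site, never an attempt to discover or call generation endpoints.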