From the Journal

Why We Built Our Own Platform Instead of Using ChatGPT

/ Sage

The fastest way to put “AI-powered readings” on a website in 2026 is to take a general-purpose chatbot, wrap it in your branding, write a system prompt that tells it to behave like a tarot reader, and ship. Many practices have done exactly that. Most of them are not honest about what is underneath.

We did not take that route. The coaching platform at coachingplatform.lostintheastral.com is something we built from the ground up. This piece explains why, and what the difference produces in your reading.

A platform can enforce real data, real history, and real boundaries; a wrapped chatbot cannot.

What you actually get from a wrapped chatbot

A general-purpose model, no matter how good, has three properties that matter when it comes to reading you:

  1. It does not know your data. It cannot calculate a precise birth chart from a real ephemeris. When you ask about your Saturn return, it generates plausible-sounding language about Saturn returns in general, decorated with whatever you typed in. It does not look at your actual chart.

  2. It does not know your situation. Each conversation starts cold. It cannot reach back and read what you said in your last session, what your 9-Self map showed two months ago, or what pattern the practitioner already named.

  3. It is optimized for fluency, not honesty. The training objective is to produce text humans rate as helpful and pleasant. That objective actively pushes the model away from naming things that would be uncomfortable to hear, which is most of what makes a real reading useful.

A wrapper around a general chatbot inherits all three problems. The branding changes. The product underneath is the same general assistant, with a costume.

What our platform does instead

Three things, none of which are available from a wrapped model.

Real ephemeris-backed readings. When the platform builds a chart, it is calculating from actual astronomical data: date, time, location, ephemeris tables. The Saturn position is correct because we computed it, not because a language model produced text that sounded astrological. The downstream interpretation is anchored to that real data, so when you read what the platform says about your Saturn return, the underlying configuration is yours and not generic.
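To make "ephemeris-backed" concrete, here is a minimal sketch of the idea: a planetary position comes out of a table lookup plus interpolation, not out of generated prose. The table values, names, and ten-day sampling here are purely illustrative, not our production code or real Saturn data; a real system reads full ephemeris tables covering centuries.

```python
from datetime import datetime, timezone

# Hypothetical mini-ephemeris: Saturn's ecliptic longitude (degrees)
# sampled at 10-day intervals. The sample values are made up.
SATURN_EPHEMERIS = [
    (2460310.5, 5.2),   # (Julian date, longitude in degrees)
    (2460320.5, 5.9),
    (2460330.5, 6.7),
]

def to_julian_date(dt: datetime) -> float:
    """Convert a UTC datetime to a Julian date."""
    # 2440587.5 is the Julian date of the Unix epoch (1970-01-01 00:00 UTC).
    return dt.timestamp() / 86400.0 + 2440587.5

def saturn_longitude(dt: datetime) -> float:
    """Interpolate Saturn's longitude from the table for a birth moment."""
    jd = to_julian_date(dt)
    for (jd0, lon0), (jd1, lon1) in zip(SATURN_EPHEMERIS, SATURN_EPHEMERIS[1:]):
        if jd0 <= jd <= jd1:
            frac = (jd - jd0) / (jd1 - jd0)
            return lon0 + frac * (lon1 - lon0)
    raise ValueError("datetime outside ephemeris table range")
```

The point of the sketch is determinism: the same birth moment always yields the same position, and that number anchors everything written downstream. A chatbot asked the same question produces fluent text, not a computed position.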

Practitioner-encoded knowledge. The frameworks the platform applies, the 9-Self map, SRP counseling structure, the divinatory work informed by decades of comparative-religion study, were built by practitioners over years inside actual sessions. They were not extracted from internet text. The platform is operating on a structure that has been stress-tested against real people; it is not free-associating from training data. We covered the operational meaning of that in What Practitioner-Encoded AI Means.

Continuity across sessions. When you come back, the platform remembers what your map looked like, what was active last time, which patterns the practitioner already named. The next read picks up where the last one left off. A general chatbot cannot do this; every session is a fresh start. Continuity is what lets readings stack instead of resetting.

Why we did not just write a system prompt


A common shortcut is to take a general chatbot, hand it an enormous system prompt that tells it to behave like a serious practitioner, give it fake context, and call that “practitioner-encoded.”

It does not work. The model still does not have your real chart. It does not have your real session history. It does not have access to the framework structure, only to text describing the framework, which is not the same thing. And the underlying optimization for pleasant fluency is not removed by the prompt; the model just performs the role you asked for, while still pulling toward the answers that feel softest.

Real practitioner-encoding requires the data layer to actually exist. The frameworks have to be implemented, not described. The session history has to be stored and queryable, not summarized into a prompt. The boundaries (what the system will and will not do; for example, no Ifá readings without an initiated practitioner) have to be enforced structurally, not requested politely in a prompt that the model can ignore under pressure.
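A minimal sketch of what "enforced structurally" means, with hypothetical names throughout (the reading types, the credential flag, and the dispatch function are illustrative, not our implementation): the check lives in code that runs before any model is invoked, so no prompt can talk the system out of it.

```python
# Hypothetical boundary table: reading types that require an
# initiated practitioner before the system will proceed at all.
RESTRICTED_READINGS = {"ifa"}

class BoundaryError(Exception):
    """Raised when a request crosses a structural boundary."""

def request_reading(reading_type: str, practitioner_initiated: bool) -> str:
    # This guard executes before any model call, so the refusal cannot
    # be negotiated away by clever wording in a conversation.
    if reading_type in RESTRICTED_READINGS and not practitioner_initiated:
        raise BoundaryError(
            f"{reading_type!r} readings require an initiated practitioner")
    return f"dispatching {reading_type} reading"  # placeholder for the pipeline
```

Contrast this with a system-prompt instruction saying "do not give Ifá readings": the instruction is a request to the model, while the guard above is a property of the system.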

That kind of enforcement only happens if you build the system. So we built the system.

The honest cost

This route is slower. It takes more engineering. It costs more to run because we are not getting the leverage of a single general API call per question.

We did not pick this route because we wanted the technical challenge. We picked it because the alternative, wrapping a chatbot, was structurally incapable of doing the thing we promise: a reading that operates on your real data, runs on practitioner-encoded frameworks, and remembers you between sessions.

Anyone selling “AI-powered readings” without those three properties is selling a different product than the one we are selling. Both can be useful in narrow ways. Only one is what we mean when we say a reading.

Where AI sits in the work

To be clear about what AI does and does not do here:

AI handles the parts of the work it is genuinely good at: calculation, cross-referencing, pattern surfacing across your structured data, and drafting language for the practitioner to review. It does not replace the practitioner. The live read is still human. The boundary that we wrote about in AI is Not Your Nervous System and What Makes a Practitioner Different from a Search Bar still holds inside the platform.

What changes is the resolution. A practitioner working with the platform’s prepared output is reading you at a different depth than one who is starting from a blank chart and your first description of the problem. That higher resolution is what the platform exists to produce.

The plain version

A wrapper around a chatbot is fast to ship and structurally cannot do what we are trying to do. Our platform is slow to build and can. That is the choice.

If you have used “AI-powered” tools before and felt like the depth was not there, the gap was not your imagination. It was usually a wrapper. The difference between a wrapper and the platform we built shows up in what comes back to you.

In plain words

It would have been faster to wrap a generic chatbot in our branding. Here’s why we did the harder thing instead, and why it matters for what shows up in your reading.


See: Why work with both AI tools and a human practitioner? for the short version. Or jump in: the coaching platform.


Scott Hinojosa Sage