In an era where data privacy and efficiency are paramount, investment analysts and institutional researchers are increasingly asking: Can we harness the power of generative AI without compromising sensitive data? The answer is a resounding yes.
This post describes a customizable, open-source framework that analysts can adapt for secure, local deployment. It showcases a hands-on implementation of a privately hosted large language model (LLM) application, customized to assist with reviewing and querying investment research documents. The result is a secure, cost-effective AI research assistant that can parse thousands of pages in seconds and never sends your data to the cloud or the internet. I use AI to augment the investment analysis process through partial automation, a topic also discussed in an Enterprising Investor post on using AI to improve investment analysis.
This chatbot-style application allows analysts to query complex research materials in plain language without ever exposing sensitive data to the cloud.
The Case for "Private GPT"
For professionals working in buy-side investment research, whether in equities, fixed income, or multi-asset strategies, the use of ChatGPT and similar tools raises a significant concern: confidentiality. Uploading research reports, investment memos, or draft offering documents to a cloud-based AI tool is usually not an option.
That's where "Private GPT" comes in: a framework built entirely on open-source components, running locally on your own machine. There is no reliance on application programming interface (API) keys, no need for an internet connection, and no risk of data leakage.
This toolkit leverages:
- Python scripts for ingestion and embedding of text documents
- Ollama, an open-source platform for hosting local LLMs on the computer
- Streamlit for building a user-friendly interface
- Mistral, DeepSeek, and other open-source models for answering questions in natural language
The underlying Python code for this example is publicly housed in the GitHub repository here. Additional step-by-step guidance on implementing the technical components of this project is provided in this supporting document.
Querying Research Like a Chatbot, Without the Cloud
The first step in this implementation is launching a Python-based virtual environment on a personal computer. This maintains a dedicated set of packages and utilities for this application alone, so the settings and package configurations that other Python applications rely on remain undisturbed. Once the environment is set up, a script reads and embeds investment documents using an embedding model. These embeddings allow LLMs to understand the document's content at a granular level, aiming to capture semantic meaning.
Because the model is hosted via Ollama on a local machine, the documents remain secure and never leave the analyst's computer. This is particularly important when dealing with proprietary research, private financials (as in private equity transactions), or internal investment notes.

A Practical Demonstration: Analyzing Investment Documents
The prototype focuses on digesting long-form investment documents such as earnings call transcripts, analyst reports, and offering statements. Once a TXT document is placed in the designated folder on the personal computer, the model processes it and becomes ready to interact. The implementation supports a wide variety of document types, ranging from Microsoft Word (.docx) and web pages (.html) to PowerPoint presentations (.pptx).
Using a web browser-based interface powered by Streamlit, the analyst can begin querying the document through the chosen model in a simple chatbot-style interface. Although this launches a web browser, the application does not interact with the internet; the browser-based rendering is simply a convenient user interface, and it could be swapped for a command-line interface or other downstream front ends. For example, after ingesting an AAPL earnings call transcript, one might simply ask:
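A minimal Streamlit front end might look like the sketch below. The prompt format, model tag, and function names are illustrative assumptions, not the repository's exact code.

```python
import textwrap

def build_prompt(question: str, context: str) -> str:
    """Combine retrieved document context with the analyst's question."""
    return textwrap.dedent("""\
        Answer the question using only the context below.

        Context:
        {context}

        Question: {question}""").format(context=context, question=question)

def render_app():
    """Chatbot-style page; launch with `streamlit run app.py`. Although it
    opens in a browser, everything is served from localhost."""
    import streamlit as st
    import ollama
    st.title("Private GPT research assistant")
    question = st.text_input("Ask about the loaded documents")
    if question:
        context = "(retrieved chunks would be inserted here)"
        reply = ollama.chat(model="mistral",
                            messages=[{"role": "user",
                                       "content": build_prompt(question, context)}])
        st.write(reply["message"]["content"])
```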
"What does Tim Cook do at AAPL?"
Within seconds, the LLM parses the content of the transcript and returns:
"…Timothy Donald Cook is the Chief Executive Officer (CEO) of Apple Inc…"
The result can be cross-verified within the application, which also shows exactly which pages the information was pulled from. With a mouse click, the user can expand the "Source" items listed below each response in the browser-based interface. The sources feeding into an answer are rank-ordered by relevance, and the program can be modified to list a different number of source references. This feature enhances transparency and trust in the model's outputs.
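Rank-ordering the supporting sources can be as simple as sorting retrieved chunks by similarity score, as in this hypothetical helper (the field names are assumptions):

```python
def format_sources(hits, n_sources=3):
    """Sort retrieved chunks by similarity score (descending) and format the
    top entries for the expandable 'Source' list shown under each response."""
    ranked = sorted(hits, key=lambda h: h["score"], reverse=True)[:n_sources]
    return [f"page {h['page']} (score {h['score']:.2f}): {h['text'][:80]}"
            for h in ranked]
```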
Model Switching and Configuration for Enhanced Performance
One standout feature is the ability to switch between different LLMs with a single click. The demonstration cycles among open-source LLMs such as Mistral, Mixtral, Llama, and DeepSeek, showing that different models can be plugged into the same architecture to compare performance or improve results. Ollama, an open-source software package that can be installed locally, facilitates this flexibility: as more open-source models become available (or existing ones are updated), Ollama enables downloading and updating them accordingly.
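Switching models then amounts to passing a different model tag to the same chat call. The tags below are examples of models Ollama can serve, assuming they have already been pulled locally.

```python
AVAILABLE_MODELS = ["mistral", "mixtral", "llama3", "deepseek-r1"]  # example tags

def ask(question: str, context: str, model: str = "mistral") -> str:
    """Send the same context-grounded prompt to whichever local model is selected."""
    if model not in AVAILABLE_MODELS:
        raise ValueError(f"Unknown model: {model}")
    import ollama
    response = ollama.chat(
        model=model,
        messages=[{"role": "user",
                   "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return response["message"]["content"]
```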
This flexibility is crucial. It lets analysts test which models best suit the nuances of a particular task, such as legal language, financial disclosures, or research summaries, all without needing access to paid APIs or enterprise-wide licenses.
Other dimensions of the model can also be adjusted to target better performance for a given task or objective. These configurations are typically managed in a standalone file, often named "config.py," as in this project. For example, the similarity threshold between chunks of text in a document can be raised (say, above 0.9) so that only very close matches are identified. This reduces noise but may miss semantically related results if the threshold is too tight for a particular context.
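A config.py along these lines might look like the fragment below; the names and values are illustrative, not the repository's exact settings.

```python
# Illustrative retrieval settings; tune per task and document type.
SIMILARITY_THRESHOLD = 0.9   # keep only very close matches (lower to cast wider)
MIN_CHUNK_LENGTH = 40        # drop fragments too short to be meaningful
CHUNK_SIZE = 800             # characters of context per chunk
CHUNK_OVERLAP = 100          # shared characters between adjacent chunks
TOP_K = 4                    # retrieved chunks that feed each answer
```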
Likewise, a minimum chunk length can be used to identify and weed out very short chunks of text that are unhelpful or misleading. Important considerations also arise from the choice of chunk size and the overlap between chunks, which together determine how the document is split into pieces for analysis. Larger chunk sizes allow more context per answer but can dilute the focus of the final response. Overlap ensures smooth continuity between adjacent chunks, so the model can interpret information that spans multiple parts of the document.
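The interplay of chunk size, overlap, and minimum length can be sketched with a simple character-based splitter. This is a simplification for illustration; real implementations often split on tokens or sentences instead.

```python
def split_into_chunks(text: str, chunk_size: int = 800,
                      overlap: int = 100, min_length: int = 40):
    """Split text into overlapping chunks and drop fragments shorter than
    min_length. Overlap lets information that spans a chunk boundary appear
    intact in at least one chunk."""
    step = chunk_size - overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    return [c for c in chunks if len(c) >= min_length]
```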
Finally, the user must decide how many of the top-ranked chunks retrieved for a query should feed into the final answer. This is a balance between speed and relevance. Using too many chunks per response can slow the tool down and introduce distractions, while using too few risks missing important context that may not appear in close proximity within the document. In conjunction with the different models served via Ollama, the user can tune these configuration parameters to best suit the task.
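Selecting the top chunks for a query is typically a cosine-similarity ranking over the stored embeddings. Here is a pure-Python sketch of that idea; the repository may use a vector store or library routine instead.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k_indices(query_vec, chunk_vecs, k=4, threshold=0.0):
    """Return indices of the k chunks most similar to the query, keeping
    only those at or above the similarity threshold."""
    scored = [(cosine_similarity(query_vec, v), i)
              for i, v in enumerate(chunk_vecs)]
    scored = [(s, i) for s, i in scored if s >= threshold]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```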
Scaling for Research Teams
While the demonstration originated in the equity research domain, the implications are broader. Fixed income analysts can load offering statements and contractual documents related to Treasury, corporate, or municipal bonds. Macro researchers can ingest Federal Reserve speeches or economic outlook documents from central banks and third-party researchers. Portfolio teams can pre-load investment committee memos or internal reports. Buy-side analysts in particular may be working with large volumes of research. For example, the hedge fund Marshall Wace processes over 30 petabytes of data daily, equating to nearly 400 billion emails.
Accordingly, the overall process in this framework is scalable:
- Add more documents to the folder
- Rerun the embedding script that ingests those documents
- Start interacting and querying
All of these steps can be executed in a secure, internal environment that costs nothing to operate beyond local computing resources.
Putting AI in Analysts' Hands, Securely
The rise of generative AI need not mean surrendering control of your data. By configuring open-source LLMs for private, offline use, analysts can build in-house applications, like the chatbot discussed here, that are just as capable as some commercial alternatives and far more secure.
This "Private GPT" concept empowers investment professionals to:
- Use AI for document analysis without exposing sensitive data
- Reduce reliance on third-party tools
- Tailor the system to specific research workflows
The full codebase for this application is available on GitHub and can be extended or tailored for use across any institutional investment setting. The architecture affords several points of flexibility that let end users adapt it to a specific use case. Built-in features for inspecting the sources of responses help verify the tool's accuracy and avoid the common LLM pitfall of hallucination. The repository is meant to serve as a guide and starting point for building downstream, local applications that are "fine-tuned" to enterprise-wide or individual needs.
Generative AI does not have to compromise privacy and data security. Used carefully, it can augment the capabilities of professionals and help them analyze information faster and better. Tools like this put generative AI directly into analysts' hands: no third-party licenses, no data compromise, and no trade-offs between insight and security.
