
New LLM Stack & ML Ops: Docugami Chooses Redis Enterprise to Scale Up Document Processing Pipeline


Modern AI pipelines demand a cutting-edge technology stack. That’s why we selected Redis Enterprise to power ours. 



The past few months have seen an explosion of interest in the technology stack needed to support Generative AI. Sequoia Capital’s recent publication, The New Language Model Stack, illuminates many different models that individual companies may pursue, and the exponential growth and massive potential of Generative AI to transform virtually every sector. Similarly, the rapid growth of the MLOps Community demonstrates the diversity and strength of the ML/AI community.  

At Docugami, we’ve been creating Generative AI for Business Documents for several years, so we were thinking deeply about the AI technology stack and related issues long before ChatGPT thrust them into the broader consciousness. The current discussion has inspired us to share some of Docugami’s real-world Machine Learning Operations best practices. We will start today by sharing why we chose Redis Enterprise to scale up our document processing pipeline.

Unlocking Scale and Value with Redis Enterprise 

Docugami runs a proprietary Business Document Foundation Model in production: a Large Language Model (LLM) combined with Layout Models for Generative AI applied to your own business documents, with the Business User as the human-in-the-loop.

Docugami is in the market today across a variety of vertical industry segments, including Commercial Insurance, Commercial Real Estate, Technology, a wide range of Professional Services sectors, and more.

Being in production enables us to measure our efficiency precisely. Our adoption of Redis Enterprise has led to remarkable improvements in our ML pipeline, our ML Ops, and our overall document processing operations. Redis Enterprise is helping us deliver on our commitment to quality and efficiency through better chunking, a more efficient vector database, and dramatic advances in scalability. We're thrilled with these results, and we remain committed to seeking out technologies and strategies that enhance the capabilities of our document processing pipeline.

Apache Spark: Better with Redis Enterprise 

Redis Enterprise has been integrated into our ML pipeline, specifically as a backing store for Spark. The Docugami Foundation Model, designed for multi-modal (image, text, and hierarchical structure) processing, requires GPU and CPU nodes in Spark. Previously, our Spark operations were backed by the standard storage technologies commonly used in the Spark community, but these proved suboptimal, with significant latency and unreliability under Spark's high-frequency access patterns.
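As a rough sketch of this pattern (the key names, TTL, and helper functions below are hypothetical illustrations, not Docugami's actual implementation), intermediate results for a Spark stage can be written to and read back from Redis via the redis-py client:

```python
import pickle


def partition_key(doc_id: str, stage: str, part: int) -> str:
    """Deterministic key for one partition's intermediate output."""
    return f"pipeline:{stage}:{doc_id}:{part}"


def write_partition(client, doc_id: str, stage: str, part: int, rows: list) -> None:
    # `client` is expected to be a redis.Redis instance. Serialize the
    # partition and store it with a TTL so stale intermediate results
    # are evicted automatically.
    client.set(partition_key(doc_id, stage, part), pickle.dumps(rows), ex=3600)


def read_partition(client, doc_id: str, stage: str, part: int) -> list:
    raw = client.get(partition_key(doc_id, stage, part))
    return pickle.loads(raw) if raw is not None else []
```

Because Redis keeps these hot intermediate values in memory, repeated high-frequency reads from worker nodes avoid the latency of a slower shared store.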

With Redis Enterprise, we deploy via the Redis Operator for Kubernetes, yielding remarkable improvements in performance and reductions in cost of goods sold (COGS), illustrating the power of this modern, high-performance database solution.

Redis Enterprise: Enhancing our Document XML Knowledge Graph 

Our transition to Redis Enterprise also extends to the persistence layer for hierarchical chunks identified in documents. These chunks, as well as user feedback on them, are critical to our operations. Through Redis, we've seen a dramatic increase in Document XML Knowledge Graph writing performance and a notable reduction in COGS. These operational improvements have facilitated a more efficient, reliable document processing workflow. 
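One simple way to persist hierarchical chunks and the user feedback attached to them is one Redis hash per chunk, with a parent pointer encoding the hierarchy. This is a minimal sketch under assumed key and field names (`doc:*:chunk:*`, `text`, `parent`, `feedback`), not Docugami's actual schema:

```python
def chunk_key(doc_id: str, chunk_id: str) -> str:
    """Key for one hierarchical chunk in a Document XML Knowledge Graph."""
    return f"doc:{doc_id}:chunk:{chunk_id}"


def store_chunk(client, doc_id: str, chunk_id: str, text: str,
                parent_id: str = "") -> None:
    # `client` is expected to be a redis.Redis instance. One hash per
    # chunk; the `parent` field links the chunk into its XML tree.
    client.hset(chunk_key(doc_id, chunk_id),
                mapping={"text": text, "parent": parent_id})


def record_feedback(client, doc_id: str, chunk_id: str, feedback: str) -> None:
    # User feedback is written onto the same hash, so a chunk and its
    # corrections travel together.
    client.hset(chunk_key(doc_id, chunk_id), "feedback", feedback)
```

Writes like these are single in-memory hash operations, which is what makes the write path fast relative to a disk-backed graph store.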

Redis VectorDB: Powering Document Chat and Retrieval 

Redis VectorDB stores chunk embeddings, which are crucial in automatically propagating user feedback across document sets. It also powers chat-based retrieval over business documents, which are represented as forests of XML trees. This functionality not only improves our document understanding capability but also accelerates the feedback loop, enhancing the overall user experience.

The integration of Redis VectorDB enables us to handle document sets more efficiently, improving the consistency and accuracy of our document processing efforts. 
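For readers unfamiliar with Redis vector search, the two building blocks are (a) packing an embedding as little-endian float32 bytes and (b) a KNN query string in RediSearch syntax. The field name `embedding` and the index usage shown in the comment are illustrative assumptions, not Docugami's actual setup:

```python
import struct


def to_float32_blob(vec) -> bytes:
    """Pack an embedding as little-endian float32 bytes, the binary
    format RediSearch expects for vector fields."""
    return struct.pack(f"<{len(vec)}f", *vec)


def knn_query(k: int, field: str = "embedding") -> str:
    # RediSearch KNN query syntax; at search time, $vec is bound to the
    # packed query embedding via query parameters, e.g. (redis-py):
    #   client.ft("chunks").search(
    #       Query(knn_query(5)).dialect(2),
    #       query_params={"vec": to_float32_blob(query_embedding)})
    return f"*=>[KNN {k} @{field} $vec AS score]"
```

The returned hits carry the `score` distance, letting retrieval rank the nearest chunks for chat-based question answering or for propagating feedback to similar chunks.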

See Docugami In Action

Docugami is available for anyone to use today, with free trials and a developer API at docugami.com. We are constantly improving and engaging with the community, including through our recently released integrations with LangChain, LlamaIndex, and Hugging Face.

We will continue sharing about the Language Model Stack that we use to build our proprietary Business Document Foundation Model and our MLOps Best Practices. 

As always, we encourage the Docugami community to engage with us through our social channels, including LinkedIn, @docugami on Twitter, and the Docugami Discord. We welcome your technical questions, feedback, and insights, and look forward to sharing more of our journey with you all. 

Happy processing! 
