Leveraging Generative AI as a Non-AI Company


Nimble Gravity is an international data science, engineering, and digital transformation advisory firm. They leverage business acumen, data acquisition and engineering, and AI/ML techniques to generate impactful insights. In this guide, Tony Aug, Co-Founder and CEO, walks through how generative AI models work, use cases for a non-AI company, and how to refine your use of generative AI.


What is generative AI? How is it different from other forms of AI?

Generative AI “makes” something based on inputs – it could be generating data, a picture, or funny-sounding music. It generates an output based on a library of assets or knowledge that it’s trained on. 

Generative AI differs from other kinds of AI that do classification or prediction – AI is a broad field, and other categories will classify or make a prediction rather than generate something. For example, AI that uses pattern recognition to predict the weather is detecting something rather than generating it. 

What is natural language processing? What is an LLM?

Natural Language Processing (NLP) is the field of artificial intelligence that deals with language – NLP attempts to tackle understanding and interpreting human language. An example capability in this field would be a model that “reads” or processes a large body of text and can identify and categorize the different uses of the word “Brooklyn”. The model would be able to tell you when “Brooklyn” is being used as a company name, a person’s name, or a place name. That’s easy for a human but has historically been really complicated for artificial intelligence. 

An LLM (large language model) is trained to understand the way words relate to each other – LLMs have become possible with the advent of really big computers and GPUs. They’re trained on a huge corpus of text (like the whole internet or all the books in the Library of Congress) and understand human-generated text by processing massive amounts of it. 

Natural Language Generation uses AI techniques to create text – NLG models will develop an understanding of language through natural language processing and use that to generate text. A language model that understands parts of speech and the way words relate to one another can start to do natural language generation. 

How do generative AI ingest and generation processes work?

Ingestion (tokenization) – the model will break up books, paragraphs, or even software into many different “tokens”. These can be understood as “syllables”: the model parses them and seeks to understand their relation to one another in their specific context (a sentence, paragraph, or line of code). By parsing and processing tons of these tokens in relation to one another, the AI can understand what a word, sentence, or paragraph “means”.

Generation (using tokens to produce output) – once the AI has broken language down into “tokens” or “syllables”, it can use that understanding to create text in which the arrangement of the different tokens follows the inferred logic of the language it was trained on, so the output will probably be grammatically correct. It uses the same tokenization process to understand whatever prompt it’s given. 
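To make the ingestion step above concrete, here is a toy sketch of tokenization and detokenization. The tiny vocabulary, the token IDs, and the `tokenize`/`detokenize` helpers are all invented for illustration; real models learn large subword vocabularies (e.g. byte-pair encodings) from data, but the idea of mapping text to integer token IDs and back is the same.

```python
# Invented toy vocabulary mapping subword "syllables" to integer IDs.
VOCAB = {"gen": 0, "era": 1, "tive": 2, " ": 3, "ai": 4, "is": 5, "fun": 6}
ID_TO_TOKEN = {i: t for t, i in VOCAB.items()}

def tokenize(text):
    """Greedily match the longest known token at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(VOCAB[piece])
                i += length
                break
        else:
            raise ValueError(f"no token for text starting at {text[i:]!r}")
    return tokens

def detokenize(ids):
    """Map token IDs back to text -- the generation direction."""
    return "".join(ID_TO_TOKEN[i] for i in ids)

ids = tokenize("generative ai is fun")
print(ids)              # [0, 1, 2, 3, 4, 3, 5, 3, 6]
print(detokenize(ids))  # generative ai is fun
```

A real model then learns which token IDs tend to follow which, and generation is the process of emitting likely next tokens and detokenizing them into text.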

An example: differentiating and creating pictures of cats:

  • Ingestion/tokenization – the ML model could be trained by looking at and tokenizing the features that make a cat a cat: pointy ears, a certain nose, the tail, etc. Once it’s trained using a set of cat and dog images to look for those “tokens” (features), it will be able to score additional images it receives until it can say whether an image is a cat or a dog.
  • Generation/creating output – the model would create and score its own images (a setup called a generative adversarial network) to refine its output until the score on the tokens (the features of a cat) reaches 100. The AI’s goal is to maximize its score. The same process happens with natural language generation: a model generates sentences and tears them apart to score the meanings until the output matches what the user prompted.
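The generate-and-score loop described above can be sketched as a toy. A real generative adversarial network uses two neural networks trained with gradient updates; here a stand-in “generator” just fixes one feature at a time until the stand-in “discriminator” gives a maximal score. The feature names are invented for illustration.

```python
import random

# Invented "cat template": which binary features a cat should have.
CAT_FEATURES = {"pointy_ears": 1, "whiskers": 1, "long_tail": 1, "floppy_ears": 0}

def score(candidate):
    """Discriminator stand-in: count features matching the cat template."""
    return sum(candidate[f] == v for f, v in CAT_FEATURES.items())

def generate_cat(seed=0):
    """Generator stand-in: start random, improve until the score is maximal."""
    rng = random.Random(seed)
    candidate = {f: rng.randint(0, 1) for f in CAT_FEATURES}
    while score(candidate) < len(CAT_FEATURES):
        # Find one feature that doesn't match the template and fix it.
        wrong = next(f for f, v in CAT_FEATURES.items() if candidate[f] != v)
        candidate[wrong] = CAT_FEATURES[wrong]
    return candidate

print(generate_cat())  # all features now match the cat template
```

The key idea carried over from real GANs is only the loop itself: generate, score, adjust, repeat until the score can’t improve.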

What are some practical business applications of generative AI?

There are three different types of generative AI business applications – there are overlaps between them, but different use cases can be categorized into:

  • Primarily ingesting applications – the AI ingests data that it then organizes or analyzes (so the output is just a reiteration of what was fed in).
  • Primarily generative applications – the AI is prompted to generate net-new content or information.
  • Both – use cases where there’s a relative balance between what’s told to the AI and how heavy a hand it has in creating a response.
Generative AI Business Use Cases

Analyze/Organize (primarily ingest):
  • Summarize the skills section of a resume
  • Categorize comments on your website as primarily positive, negative, or neutral
  • Analyze, structure, and bucket product reviews

Create (primarily generate):
  • Write massive quantities of content quickly (e.g. product descriptions)
  • Create custom landing pages (so a company from France would see a page that resonates with them)
  • Create custom marketing language for your product targeted to different audiences
  • Stock photo creation for marketing materials
  • Copilot for software writing
  • Website chatbot
  • Alexa and Siri-type capabilities enabled by AI (AI virtual assistant)
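The “categorize comments” use case above boils down to a classification prompt sent to an LLM. Here is a minimal sketch of that pattern; the prompt wording, the `classify` wrapper, and the `llm` callable are all hypothetical stand-ins for whatever provider and client library you actually use.

```python
# Hypothetical prompt template for sentiment bucketing of website comments.
PROMPT_TEMPLATE = (
    "Classify the following website comment as exactly one of: "
    "positive, negative, neutral.\n\nComment: {comment}\nLabel:"
)

def build_prompt(comment):
    return PROMPT_TEMPLATE.format(comment=comment)

def classify(comment, llm):
    """llm is any callable that takes a prompt string and returns text."""
    return llm(build_prompt(comment)).strip().lower()

# Stubbed model for illustration; a real call would hit a provider's API.
fake_llm = lambda prompt: " positive "
print(classify("Love the new checkout flow!", fake_llm))  # positive
```

Constraining the model to a fixed label set in the prompt makes the output easy to store and aggregate downstream.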

Who are the main generative AI/NLP “infrastructure” players? What are the different categories of solutions?

OpenAI (ChatGPT) – they have a big lead over every other model for text-based generative AI. Multimodal (image, video, audio, etc.) capabilities are a more open landscape. 

Google (Bard) – they’re the number two player, and their text processing is behind OpenAI’s, but right now they’re the only other real player in the space for text. 

Midjourney is the leader in generating images and analyzing/describing them – it’s really great for generating images, but it also has interesting image analysis capabilities. If you upload an image and ask Midjourney to describe it, it can tell you all kinds of details about what’s in the image.

Eventually, we’ll likely be in a world where “There’s an AI for that” – it’s likely that as the space matures, there will be a plethora of purpose-built tools directed at certain tasks. The different tools will probably run on top of someone else’s infrastructure, but there will be a unique tool that can apply AI, or is enabled by AI, for whatever task you’re trying to accomplish.

Use Cases 

How can you use generative AI in software development?

AI-enabled Copilot for developers provides valuable type-ahead – Copilot makes suggestions to auto-complete your code and is effectively writing software for you, though you still need a really capable software developer using it. The hope is that Copilot makes developers faster and better. 

Tip: Read the terms of use and privacy policy before using any AI developer tools – understand whether or not the code you enter into the tool contributes back to some training corpus. Some models will be trained on the code you enter, and some are only ever trained on open-source code on GitHub. Checking this can prevent you from leaking IP.

Prediction: AI tools likely mean a lot of Low-Code/No-Code tools will disappear – these do pretty basic and simple developer activities with a little no-code form, and GPT and other tools will likely replace them. The fundamental shift is toward interacting with a computer the same way you would interact with a person – asking it, “Can you please do x” – and the result is pretty magical. 

How can you use generative AI in content generation? 

You can create content, write emails, or generate any text – the content generation use cases are endless. Marketing and content generation are central use cases, but virtually everyone does some writing. You can draft an email and ask the AI to rephrase it to sound nicer.  

Tip: Look for tools that can make content generation easier – there’s a lot of infrastructure being built on top of the content generation tools. The ChatGPT API lets you push in a prompt and other parameters to generate content, but you won’t necessarily remember the prompt. Now there’s an interstitial layer of software (e.g. Scale AI’s tool) that helps you manage prompts and tuning.

How can you use generative AI in your sales function? 

Generating custom cold outbound emails – you can write custom emails that are going to seem like they’re written by a person. You can get a lot more personalized and interesting than your mail merge of old. 

Rapid data enrichment in your CRM from public data – if a company comes to your website, you might pull firmographic details to understand what they do and push that information into your CRM via an API. Imagine you could take all of the text off the company’s homepage and ask an AI API to summarize what they do for you. Suddenly, researching leads is a lot easier.
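A minimal sketch of that enrichment flow, assuming you already have the homepage text in hand: the `summarize_fn` argument is a stand-in for a real LLM API call, and the CRM “record” is just a dict standing in for whatever your CRM’s API expects.

```python
def enrich_lead(crm_record, homepage_text, summarize_fn):
    """Add an AI-generated company summary to a CRM record (a plain dict).

    summarize_fn is any callable that takes a prompt string and returns
    text -- in practice, a call to your LLM provider's API.
    """
    prompt = (
        "In one sentence, summarize what this company does, "
        "based on its homepage text:\n\n" + homepage_text[:4000]  # cap length
    )
    crm_record["company_summary"] = summarize_fn(prompt)
    return crm_record

# Stubbed model for illustration:
lead = {"domain": "example.com"}
fake_llm = lambda prompt: "They sell modular office furniture online."
print(enrich_lead(lead, "Example Co. builds modular office furniture...", fake_llm))
```

Truncating the homepage text before prompting is one simple way to stay within the model’s context limits.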

How can you use generative AI in marketing and advertising? 

Generate marketing materials more efficiently – the creative marketer already has a ChatGPT account and is using it to personalize or draft their marketing content and language at scale. 

Individualized marketing – for example, AI can help marketers generate deeply personalized landing pages.

Enriching sales outreach and ABM – AI can integrate with Salesforce, HubSpot, and ABM tools, allowing teams to enrich customer data and provide more personalized or accurate information about leads to your marketing tools.

How can you use generative AI to improve customer service and customer experience? 

Advanced chatbots – AI can enable you to engage with customer queries 24/7 through chatbots. Eventually, talking with an AI chatbot will be nearly indistinguishable from chatting with a customer service rep.

Automated testing of customer service processes – it’s really hard to test customer support systems and tools: if you want to test a chat feature, you have to walk through many different responses manually. You’ll eventually be able to use an AI that mimics an upset or nefarious customer – it will complain, try to return things, etc. – to test your customer experience and response.

Using AI APIs

What are the different ways you can customize a model for your application? When should you use each? 

If you want to get started quickly, use prompt engineering – prompt engineering is the simplest way to improve output. Getting good at prompts and sharing them is probably the most important thing to do this early in the game. Take advantage of what other people are learning and get good at asking GPT smart questions. 

Tip: Develop prompt recording practices – using tools to capture and organize your prompts will help you get better answers and learn over time. You’ll also need to backtest prompts: sometimes you’ll realize a prompt you wrote yesterday got x, but it returns y today because there was an update to the model.
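A tiny sketch of what prompt recording and backtesting could look like. The log structure and function names are invented; in practice you might use a dedicated prompt-management tool, but the idea is the same: store each prompt with the answer you got, then re-run the log to detect when a model update changes the output.

```python
import datetime

prompt_log = []  # each entry: prompt, the output we accepted, and when

def record(prompt, output):
    prompt_log.append({
        "prompt": prompt,
        "output": output,
        "recorded_at": datetime.date.today().isoformat(),
    })

def backtest(llm):
    """Re-run every logged prompt; return the entries whose output drifted.

    llm is any callable that takes a prompt string and returns text.
    """
    return [e for e in prompt_log if llm(e["prompt"]) != e["output"]]

record("Summarize our refund policy in one line.", "Refunds within 30 days.")
drifted = backtest(lambda p: "Refunds within 14 days.")  # simulated model update
print(len(drifted))  # 1
```

Running the backtest after every model update (or on a schedule) turns “the prompt stopped working” from a surprise into a report.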

If you have a lot of the same type of data, invest in fine-tuning – it makes sense to train a model on data from your specific context if you’re going to have a particularly substantial use case. If you want to effectively teach a model something new, fine-tuning with your own data is the way to do it.

Example: if you have a large number (like hundreds) of examples of data containing the prompt, the text, and the expected output, and you want GPT to mimic those, then fine-tuning can be a good approach.

Note:  GPT-4 currently does not support fine-tuning.
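As a sketch of what that fine-tuning data can look like: OpenAI’s fine-tuning endpoint for its tunable models expected a JSONL file with one prompt/completion pair per line at the time of writing. The resume-summary examples and the filename below are invented for illustration.

```python
import json

# Invented prompt/completion pairs in the resume-summarization style
# mentioned above; a real fine-tune would use hundreds of these.
examples = [
    {"prompt": "Summarize: 10 years of Python, SQL, and Spark ->",
     "completion": " Senior data engineer with a strong Python/SQL background."},
    {"prompt": "Summarize: 2 years of retail management ->",
     "completion": " Early-career retail manager."},
]

# JSONL: one standalone JSON object per line.
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

print(open("training_data.jsonl").readline().strip())
```

The resulting file is what you would upload to the provider’s fine-tuning endpoint; check your provider’s current documentation, since the expected format changes between model generations.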

If your team does the same thing over and over, embed GPT into your tools – embed AI into your software for purpose-built use cases using the GPT API. There are also a growing number of tools that assist with this process, like Scale.ai’s Spellbook.

How can you improve accuracy in AI responses? 

Add a human in the loop to provide feedback – “human in the loop” means the model will create something and a live person will approve or disapprove the output—their approval becomes part of the feedback loop that trains the model over time. If you take this route, you’ll probably include multiple people in the loop or have multiple loops with different people so that there’s consensus when something is amiss in a response. 

Lower temperature settings – temperature is a measure of how much creativity there is in the response. In Midjourney, they call it chaos and it’s on a scale of 0-100. If you put the slider up to 99, the image you receive will include a lot of very weird stuff. For GPT, OpenAI has carefully considered the default temperature, and that’s likely ideal for many use cases. To ask the AI to be less creative, you can lower the temperature setting. Be particularly careful about raising the temperature; you might get really weird stuff. 
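A sketch of setting temperature on a request. The payload shape follows OpenAI’s chat completions API at the time of writing (model names and accepted ranges change, so verify against current docs); the `build_request` helper is invented, and the dict it returns would be passed to your client library rather than printed.

```python
def build_request(user_message, temperature=1.0):
    """Build a chat-completion request payload with an explicit temperature."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("OpenAI's API accepts temperatures from 0 to 2")
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,  # lower = more deterministic output
    }

# A low-temperature request for a task where consistency matters:
conservative = build_request("Summarize this product review.", temperature=0.2)
print(conservative["temperature"])  # 0.2
```

For classification or summarization you generally want a low temperature; save higher values for brainstorming-style tasks where “weird stuff” is the point.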

What are the most important pieces to get right? 

The companies that embrace it and use it well will be at a huge advantage – companies that embrace AI and figure out how to use it well are probably going to find some unique benefits that we can’t imagine today. Early adopters will figure it out—AI is bigger than smartphones and maybe bigger than the computer itself. It’s a huge revolution toward interfacing with computers as we do with humans. 

What are common pitfalls? 

Taking an inflexible approach – generative AI is likely going to have some impact on the way you do business—you need to recognize that and be open to incorporating it. At the same time, we’re in the early days of generative AI—use cases and capabilities are likely to change quickly, so be prepared to adapt with the technology. 
