SceneProgLLM is a Python package that wraps LangChain's LLM interface with enhanced functionality: support for text, code, JSON, list, pydantic, image, speech, and embedding response formats, as well as image inputs. It is built to support SceneProg projects and currently supports only the OpenAI backend.
- Flexible Response Formats:
  - Supports text, code, list, JSON, pydantic, image, speech, and embedding outputs.
- Image Input and Output:
  - Accepts image inputs and enables image generation through OpenAI's image generation API.
- System Template:
  - Allows users to set a system description template containing placeholders that can later be filled with values.
To install the package and its dependencies, use the following command:

```bash
pip install sceneprogllm
```

For proper usage, create a `.env` file in the package root with the following field:

```
OPENAI_API_KEY=<Your OpenAI key>
```
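The package presumably reads `OPENAI_API_KEY` from the environment once the `.env` file is loaded. The helper below is a minimal sketch (not part of sceneprogllm) for checking that the key is visible to the current process before you construct an `LLM`:

```python
import os

# Minimal sanity check (not part of sceneprogllm): confirm that
# OPENAI_API_KEY is visible to the current process.
def has_openai_key(env=os.environ):
    return bool(env.get("OPENAI_API_KEY"))

print(has_openai_key())
```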
- Importing the Package

```python
from sceneprogllm import LLM
```

- Generating Text Responses
```python
llm = LLM(response_format="text")
response = llm("What is the capital of France?")
print(response)
>> The capital of France is Paris.
```

- Generating JSON Responses
```python
llm = LLM(
    response_format="json",
    response_params={"capital": "str", "currency": "str"},
)
query = "What is capital and currency of India?"
response = llm(query)
print(response)
>> {'capital': 'New Delhi', 'currency': 'Indian Rupee'}
```
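Since `response_params` only declares the expected shape, it can be worth validating the returned dict before using it downstream. The checker below is a hypothetical helper (not part of sceneprogllm) that assumes the type names map to Python builtins:

```python
# Hypothetical helper (not part of sceneprogllm): verify that a JSON
# response contains every declared field with the declared builtin type.
_TYPES = {"str": str, "int": int, "float": float, "bool": bool, "list": list}

def matches_schema(response, response_params):
    return all(
        key in response and isinstance(response[key], _TYPES[type_name])
        for key, type_name in response_params.items()
    )

schema = {"capital": "str", "currency": "str"}
print(matches_schema({"capital": "New Delhi", "currency": "Indian Rupee"}, schema))  # True
```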
- Generating List Responses

```python
llm = LLM(
    response_format="list",
)
query = "List G7 countries"
response = llm(query)
print(response)
>> ['Canada', 'France', 'Germany', 'Italy', 'Japan', 'United Kingdom', 'United States']
```

- Generating Pydantic Responses
```python
from pydantic import BaseModel, Field

class mypydantic(BaseModel):
    country: str = Field(description="Name of the country")
    capital: str = Field(description="Capital city of the country")

llm = LLM(
    response_format="pydantic",
)
response = llm("What is the capital of France?", pydantic_object=mypydantic)
print(response)
>> country='France' capital='Paris'
```

- Generating Python Code
```python
llm = LLM(response_format="code")
query = "Write a Python function to calculate factorial of a number."
response = llm(query)
print(response)
>>
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers")
    elif n == 0 or n == 1:
        return 1
    else:
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result
```
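Because `response_format="code"` returns plain source text, a quick way to smoke-test it is to `exec` the string in an isolated namespace and call the resulting function. This is a sketch: the string below simply mirrors the factorial output above, and `exec` should only ever be run on code you trust:

```python
# Execute an LLM-returned code string in an isolated namespace and
# smoke-test the resulting function. The string mirrors the output above.
code_str = '''
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers")
    elif n == 0 or n == 1:
        return 1
    else:
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result
'''

namespace = {}
exec(code_str, namespace)  # caution: only exec code you trust
print(namespace["factorial"](5))  # 120
```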
- Generating images from text

```python
llm = LLM(response_format="image", response_params={"size": "1024x1536", "quality": "auto"})
response = llm("Generate an image of a futuristic cityscape.")
response.save("futuristic_city.png")
```

- Generating speech from text
```python
llm = LLM(response_format="speech", response_params={"output_path": "speech.wav", "voice": "coral"})
speech_path = llm("This is how I speak!")
```

- Generating embeddings from text
```python
llm = LLM(response_format="embedding")
response = llm(["Hello World", "I like you"])
```
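Assuming the embedding call returns one numeric vector per input string (an assumption about the response shape, not documented above), the vectors can be compared with plain cosine similarity:

```python
import math

# Cosine similarity between two embedding vectors (pure Python; the toy
# vectors below stand in for real embedding output).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v1, v2 = [0.1, 0.3, 0.6], [0.2, 0.1, 0.7]
print(cosine_similarity(v1, v2))
```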
- Query using Images
```python
llm = LLM(response_format="json", response_params={"count": "int"})
image_paths = ["assets/lions.png"]
response = llm("How many lions are there in the image?", image_paths=image_paths)
print(response)
>> {'count': 6}
```

- Generating images from text and image
```python
llm = LLM(response_format="image", response_params={"size": "1024x1536", "quality": "auto"})
response = llm("Make the picture realistic", image_paths=["futuristic_city.png"])
response.save("real_futuristic_city.png")
```

- Set seed and temperature
```python
llm = LLM(
    seed=0,
    temperature=1.0,
)
```

- Control behaviour via system description
```python
llm = LLM(
    system_desc="You are a funny AI assistant",
)
response = llm("What is the capital of France")
print(response)
>>
Ah, the capital of France! That's Paris, the city of romance, lights, and baguettes longer than your arm! Just imagine the Eiffel Tower wearing a beret and saying, "Bonjour!"
```

- Using Template
```python
from sceneprogllm import LLM

llm = LLM(
    system_desc="You are a helpful assistant. {description}",
)
response = llm("What is the capital of France?", system_desc_keys={"description": "You are a funny AI assistant"})
print(response)
```

Please send your questions to k5gupta@ucsd.edu