
FlashLearn – Use LLMs with a simple fit/predict flow
FlashLearn – Agents made simple
FlashLearn provides a simple interface and orchestration (up to 1000 calls/min) for incorporating agent LLMs into your typical workflows and ETL pipelines. Conduct data transformations, classification, summarization, rewriting, and custom multi-step tasks just as you would with any standard ML library, harnessing the power of LLMs under the hood. Each step and task has a compact JSON definition, which makes pipelines simple to understand and maintain. It supports LiteLLM, Ollama, OpenAI, DeepSeek, and all other OpenAI-compatible clients.
🚀 Examples
📖 Full Documentation
Installation
pip install flashlearn
Add the API keys for the provider you want to use to your .env file.
OPENAI_API_KEY=
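If your code loads keys from .env at startup, a minimal sketch using the python-dotenv package (an assumption; any loader that populates the environment works):
from dotenv import load_dotenv  # hypothetical choice of env loader
load_dotenv()  # makes OPENAI_API_KEY from .env visible to the OpenAI client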
High-Level Concept Flow
flowchart TB
classDef smallBox font-size:12px, padding:0px;
H[Your Data] --> I[Load Skill / Learn Skill]
I --> J[Create Tasks]
J --> K[Run Tasks]
K --> L[Structured Results]
L --> M[Downstream Steps]
class H,I,J,K,L,M smallBox;
Learning a New “Skill”
Like a fit/predict pattern, you can quickly “learn” a custom skill. Below, we’ll create a skill that evaluates the likelihood of buying a product from user comments on social media posts, returning a score (1–100) and a short reason. We’ll instruct the LLM to transform each comment according to our custom specifications.
from flashlearn.skills.learn_skill import LearnSkill
from openai import OpenAI

# Instantiate your pipeline “estimator” or “transformer”
learner = LearnSkill(model_name="gpt-4o-mini", client=OpenAI())

# Provide instructions and sample data for the new skill
skill = learner.learn_skill(
    df=[],  # if you want, you can also pass a data sample
    task=(
        "Evaluate how likely the user is to buy my product based on the sentiment in their comment, "
        "return an integer 1-100 on key 'likely_to_buy', "
        "and a short explanation on key 'reason'."
    ),
)

# Save the skill so it can be used from any system
skill.save("evaluate_buy_comments_skill.json")
Input Is a List of Dictionaries
Whether you retrieved data from an API, a spreadsheet, or user-submitted forms, you can simply wrap each record into a dictionary. FlashLearn’s “skills” accept a list of such dictionaries, as shown below:
user_inputs = [
    {"comment_text": "I love this product, it's everything I wanted!"},
    {"comment_text": "Not impressed... wouldn't consider buying this."},
    # ...
]
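For example, a minimal sketch building that list from a CSV export with the standard library (the file name and column are hypothetical):
import csv

# Hypothetical CSV with a "comment_text" column
with open("comments.csv", newline="", encoding="utf-8") as f:
    user_inputs = [{"comment_text": row["comment_text"]} for row in csv.DictReader(f)]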
Run in 3 Lines of Code
Once you’ve defined or learned a skill, you can load it as though it were a specialized transformer in a standard ML pipeline. Then apply it to your data in just a few lines:
import json

from flashlearn.skills.general_skill import GeneralSkill

# Suppose we previously saved a learned skill to "evaluate_buy_comments_skill.json".
with open("evaluate_buy_comments_skill.json", "r", encoding="utf-8") as file:
    definition = json.load(file)

skill = GeneralSkill.load_skill(definition)
tasks = skill.create_tasks(user_inputs)
results = skill.run_tasks_in_parallel(tasks)
print(results)
Get Structured Results
FlashLearn returns structured outputs for each of your records. The keys in the results dictionary map to the indexes of your original list. For example:
{
  "0": {
    "likely_to_buy": 90,
    "reason": "Comment shows strong enthusiasm and positive sentiment."
  },
  "1": {
    "likely_to_buy": 25,
    "reason": "Expressed disappointment and reluctance to purchase."
  }
}
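Because the keys are the string indexes of your input list, you can merge each output back onto its source record in one pass. A minimal sketch, reusing user_inputs and results from above:
# Attach each structured output to the record it came from
merged = [{**user_inputs[int(idx)], **output} for idx, output in results.items()]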
Pass on to Next Steps
Each record’s output can then be used in downstream tasks. For instance, you might:
- Store the results in a database
- Filter for high-likelihood leads
- Send them to another tool for further analysis (for example, rewriting the “reason” in a formal tone)
Below is a small example showing how you might parse the dictionary and feed it into a separate function:
# Suppose 'flash_results' is the dictionary with structured LLM outputs
for idx, result in flash_results.items():
    desired_score = result["likely_to_buy"]
    reason_text = result["reason"]
    # Now do something with the score and reason, e.g., store in DB or pass to next step
    print(f"Comment #{idx} => Score: {desired_score}, Reason: {reason_text}")
Supported LLM Providers
Anywhere you might rely on an ML pipeline component, you can swap in an LLM:
client = OpenAI()  # This is equivalent to instantiating a pipeline component

deep_seek = OpenAI(api_key='YOUR DEEPSEEK API KEY', base_url="https://api.deepseek.com")

lite_llm = FlashLiteLLMClient()  # LiteLLM integration manages keys as environment variables, akin to a top-level pipeline manager

ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')  # api_key is required but unused; just use Ollama's OpenAI-compatible endpoint
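Any of these clients can be passed wherever a skill accepts a client parameter. For example, a minimal sketch routing a classification through DeepSeek (the model name is an assumption; check your provider's model list):
from flashlearn.skills.classification import ClassificationSkill

# Hypothetical: same skill, different OpenAI-compatible backend
skill = ClassificationSkill(
    model_name="deepseek-chat",  # assumed DeepSeek model name
    client=deep_seek,
    categories=["positive", "negative"],
    system_prompt="Classify the text by sentiment."
)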
KEY IDEA: JSON in, JSON out
Examples by use case
Customer service
Finance
Marketing
Personal assistant
Product intelligence
Sales
Software development
📖 Full Documentation
Customization
“All JSON, All the Time”: Example Classification Workflow
The following example classifies IMDB movie reviews into “positive” or “negative” sentiment. Notice how at each step you can view, store, or chain the partial results—always in JSON format.
from flashlearn.utils import imdb_reviews_50k
from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import ClassifyReviewSentiment
import json
import os

def main():
    os.environ["OPENAI_API_KEY"] = "API-KEY"

    # Step 1: Load or generate your data
    data = imdb_reviews_50k(sample=100)  # 100 sample reviews

    # Step 2: Load JSON definition of skill in dict format
    skill = GeneralSkill.load_skill(ClassifyReviewSentiment)

    # Step 3: Save skill definition in JSON for later loading
    # skill.save("BinaryClassificationSkill.json")

    # Step 4: Convert data rows into JSON tasks
    tasks = skill.create_tasks(data)

    # Step 5: Save tasks to a JSONL file so you can run them now or later
    with open('tasks.jsonl', 'w') as jsonl_file:
        for entry in tasks:
            jsonl_file.write(json.dumps(entry) + '\n')

    # Step 6: Run tasks (in parallel by default)
    results = skill.run_tasks_in_parallel(tasks)

    # Step 7: Every output is strict JSON
    # You can easily map results back to inputs,
    # e.g., store results as JSON Lines
    with open('sentiment_results.jsonl', 'w') as f:
        for task_id, output in results.items():
            input_json = data[int(task_id)]
            input_json['result'] = output
            f.write(json.dumps(input_json) + '\n')

    # Step 8: Inspect or chain the JSON results
    print("Sample result:", results.get("0"))

if __name__ == "__main__":
    main()
The output is consistently keyed by task ID ("0", "1", etc.), with JSON content that your pipeline can parse or store with no guesswork.
“Skill” Is Just a Simple Dictionary
Internally, a skill is just a compact JSON-like object containing instructions and, optionally, function definitions to strictly validate LLM outputs. You can generate this skill from sample data (as shown above) or create it directly:
EvaluateToBuySkill = {
    "skill_class": "GeneralSkill",
    "system_prompt": "Evaluate how likely the user is to buy our product, returning an integer 1-100 and a short reason.",
    "function_definition": {
        "type": "function",
        "function": {
            "name": "EvaluateToBuySkill",
            "description": "Assess user text with respect to willingness to buy a certain product.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {
                    "likely_to_buy": {
                        "type": "integer",
                        "description": "A number from 1 to 100 indicating how likely the user is to buy."
                    },
                    "reason": {
                        "type": "string",
                        "description": "A brief explanation of why this is the likely score."
                    }
                },
                "required": ["likely_to_buy", "reason"],
                "additionalProperties": False
            }
        }
    }
}
You can load or save this skill to JSON as needed, version it, share it, or plug it into your pipelines. FlashLearn makes the entire process—training, storing, loading, and using such custom LLM transformations—simple and uniform.
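For instance, the hand-written dictionary above loads exactly like a learned skill, reusing load_skill as shown earlier:
from flashlearn.skills import GeneralSkill

# Load the dictionary defined above and run it on the earlier user_inputs
skill = GeneralSkill.load_skill(EvaluateToBuySkill)
tasks = skill.create_tasks(user_inputs)
results = skill.run_tasks_in_parallel(tasks)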
Single-Step Classification Using Prebuilt Skills
Classic classification tasks are as straightforward as calling "fit_predict" on an ML estimator:
import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

data = [{"message": "Where is my refund?"}, {"message": "My product was damaged!"}]

skill = ClassificationSkill(
    model_name="gpt-4o-mini",
    client=OpenAI(),
    categories=["billing", "product issue"],
    system_prompt="Classify the request."
)

tasks = skill.create_tasks(data)
print(skill.run_tasks_in_parallel(tasks))
High-throughput
Process up to 999 tasks in 60 seconds on your local machine. For higher loads and enterprise needs, contact us about an enterprise solution.
Processing tasks in parallel: 100%|██████████| 999/999 [01:00<00:00, 16.38 it/s, In total: 368969, Out total: 17288]
INFO:ParallelProcessor:All tasks complete. 999 succeeded, 0 failed.
Parallel Execution: run_tasks_in_parallel organizes concurrent requests to the LLM.
Cost Estimation
Quickly preview your token usage:
cost_estimate = skill.estimate_tasks_cost(tasks)
print("Estimated cost:", cost_estimate)
Loading a Skill
Here’s how the library handles comedic rewrites:
from flashlearn.skills import GeneralSkill
from flashlearn.skills.toolkit import HumorizeText

def main():
    data = [{"original_text": "We are behind schedule."}]
    skill = GeneralSkill.load_skill(HumorizeText)
    tasks = skill.create_tasks(data)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

if __name__ == "__main__":
    main()
You’ll see output like:
{
  "0": {
    "comedic_version": "Hilarious take on your sentence..."
  }
}
Everything is well-structured JSON, suitable for further analysis.
Contributing & Community
- Licensed under MIT.
- Fork us to add new skills, fix bugs, or create new examples.
- We aim to make robust LLM workflows accessible to all startups.
- All code needs at least 95% test coverage.
- Explore the examples folder for more advanced usage patterns.
License
MIT License.
Use it in commercial products and personal projects.
“Hello World” Demos
Image Classification
import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill
from flashlearn.utils import cats_and_dogs

def main():
    # os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'
    data = cats_and_dogs(sample=6)

    skill = ClassificationSkill(
        model_name="gpt-4o-mini",
        client=OpenAI(),
        categories=["cat", "dog"],
        max_categories=1,
        system_prompt="Classify what's in the picture."
    )

    column_modalities = {"image_base64": "image_base64"}
    tasks = skill.create_tasks(data, column_modalities=column_modalities)
    results = skill.run_tasks_in_parallel(tasks)
    print(results)

    # Save skill definition for reuse
    skill.save("MyCustomSkillIMG.json")

if __name__ == "__main__":
    main()
Text Classification
import json
import os
from openai import OpenAI
from flashlearn.skills.classification import ClassificationSkill
from flashlearn.utils import imdb_reviews_50k

def main():
    # os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'
    reviews = imdb_reviews_50k(sample=100)

    skill = ClassificationSkill(
        model_name="gpt-4o-mini",
        client=OpenAI(),
        categories=["positive", "negative"],
        max_categories=1,
        system_prompt="Classify short movie reviews by sentiment."
    )

    # Convert each row into a JSON-based task
    tasks = skill.create_tasks([{'review': x['review']} for x in reviews])
    results = skill.run_tasks_in_parallel(tasks)

    # Compare predicted sentiment with ground truth for accuracy
    correct = 0
    for i, review in enumerate(reviews):
        predicted = results[str(i)]['categories']
        reviews[i]['predicted_sentiment'] = predicted
        if review['sentiment'] == predicted:
            correct += 1
    print(f'Accuracy: {round(correct / len(reviews), 2)}')

    # Store final results as JSON Lines
    with open('results.jsonl', 'w') as jsonl_file:
        for entry in reviews:
            jsonl_file.write(json.dumps(entry) + '\n')

    # Save the skill definition
    skill.save("BinaryClassificationSkill.json")

if __name__ == "__main__":
    main()
Final Words
FlashLearn brings clarity to LLM workflows by enforcing consistent JSON output at every step. Whether you run a single classification or a multi-step pipeline, you can store partial results, debug easily, and maintain confidence in your data.