Description
Expected Behavior
Running the command with my prompt fails AFTER it appears to have generated the code. It asked whether I wanted to run the code now (I answered no) and whether it worked (I answered uncertain), and I left the comments blank.
I expected it to output the src/ files, but instead it produced the error below saying there is no module named rudderstack. I have not seen that error in previous gpt-engineer projects.
Current Behavior
The command failed after I left the review comments blank (see above).
Failure Information
I'm using the default model, which appears to be gpt-4o.
Failure Logs
No logs were saved, but I have copy/pasted the entire error message below.
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\Users\jpink\OneDrive - MidAmerican Aerospace, │
│ Ltd\Dev\gpt-engineer.venv\Lib\site-packages\gpt_engineer\applications\cli\main.py:483 in main │
│ │
│ 480 │ │ │ files_dict = agent.init(prompt) │
│ 481 │ │ │ # collect user feedback if user consents │
│ 482 │ │ │ config = (code_gen_fn.name, execution_fn.name) │
│ ❱ 483 │ │ │ collect_and_send_human_review(prompt, model, temperature, config, memory) │
│ 484 │ │ │
│ 485 │ │ stage_uncommitted_to_git(path, files_dict, improve_mode) │
│ 486 │
│ │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ agent = <gpt_engineer.applications.cli.cli_agent.CliAgent object at 0x000001CD4EB2AF00> │ │
│ │ ai = <gpt_engineer.core.ai.AI object at 0x000001CD4E577020> │ │
│ │ azure_endpoint = '' │ │
│ │ clarify_mode = False │ │
│ │ config = ('gen_code', 'execute_entrypoint') │ │
│ │ debug = False │ │
│ │ entrypoint_prompt_file = '' │ │
│ │ execution_env = <gpt_engineer.core.default.disk_execution_env.DiskExecutionEnv object at │ │
│ │ 0x000001CD4EB2A660> │ │
│ │ files = <gpt_engineer.core.default.file_store.FileStore object at 0x000001CD4EB2A990> │ │
│ │ files_dict = { │ │
│ │ │ 'Structure': 'email_analyzer/\n│\n├── main.py\n├── email_processor.py\n├── │ │
│ │ llm_client.py\n├── confi'+150, │ │
│ │ │ 'application.': 'import tkinter as tk\nfrom tkinter import filedialog, │ │
│ │ messagebox\nfrom email_proce'+3312, │ │
│ │ │ 'processing.': 'import extract_msg\nimport json\nimport re\n\nclass │ │
│ │ EmailProcessor:\n def extract_'+1755, │ │
│ │ │ 'API.': 'import openai\nfrom config_manager import ConfigManager\n\nclass │ │
│ │ LLMClient:\n def'+1315, │ │
│ │ │ 'settings.': 'import os\nfrom dotenv import load_dotenv\n\nclass ConfigManager:\n │ │
│ │ def init('+574, │ │
│ │ │ 'file.': 'API_KEY=your_xai_api_key_here\nBASE_URL=https://api.x.ai/v1', │ │
│ │ │ 'packages.': 'tk\nopenai\npython-dotenv\npandas\nextract-msg', │ │
│ │ │ 'run.sh': '#!/bin/bash\n\n# Step a: Install dependencies\npip install -r │ │
│ │ email_analyzer/requir'+72 │ │
│ │ } │ │
│ │ image_directory = '' │ │
│ │ improve_mode = False │ │
│ │ lite_mode = False │ │
│ │ llm_via_clipboard = False │ │
│ │ memory = <gpt_engineer.core.default.disk_memory.DiskMemory object at 0x000001CD4E5778C0> │ │
│ │ model = 'gpt-4o' │ │
│ │ no_execution = False │ │
│ │ path = WindowsPath('email_to_RFQ_GUI') │ │
│ │ preprompts_holder = <gpt_engineer.core.preprompts_holder.PrepromptsHolder object at 0x000001CD4EA7B4D0> │ │
│ │ project_path = '.\email_to_RFQ_GUI' │ │
│ │ prompt = Prompt(text='I want to create a small app with a simple, but attractive GUI that will │ │
│ │ take the body of an email via copy/paste or by uploading an outlook .msg file or .eml │ │
│ │ file and analyze it using an LLM API such as OpenAI or another that can use the openai │ │
│ │ library, such as grok. By default, it should use the gpt-4o-mini model. The emails │ │
│ │ will contain multiple items. Each item will have information such as a part number, a │ │
│ │ condition code (AR, SV, OH, NE, NS), as well possibly a description, quantity, price, │ │
│ │ and other some other various fields.\nThe app should have a separate config tab where │ │
│ │ it can save various other information in a secure way. By default when the program │ │
│ │ loads it will load settings from that file, such as the API key and model name that we │ │
│ │ want to use.\nPlease ensure that a library is included that can read the outlook .msg │ │
│ │ file.\nIn order to potentially use other APIs, please include code similar to the │ │
│ │ following:\nclient = openai.OpenAI(\n api_key="your_xai_api_key_here", # Replace │ │
│ │ with your xAI API key\n base_url="https://api.x.ai/v1" # xAI's API │ │
│ │ endpoint\n)\nThe API key and the base_url should be stored in a separate .env or config │ │
│ │ file.\nWhen the user submits the file or the email body, it will submit the text using │ │
│ │ the openai library to the selected LLM API. The prompt will be the following: Extract │ │
│ │ the following information from the email text and return it in a structured JSON │ │
│ │ format. The email contains quotes for aerospace parts. For each item, extract:\n1. PN │ │
│ │ (Part Number)\n2. Description\n3. Tag Information\n4. Condition Code\n5. Tag Date\n6. │ │
│ │ Price\n\nHere is the email text:\n\n\nReturn the data in the │ │
│ │ following JSON format:\n{\n "items": [\n {\n "PN": "9008000-10001",\n │ │
│ │ "Description": "Data Link Transponder (757)",\n "Tag Information": "American │ │
│ │ Airlines - ACSS Repair & Overhaul",\n "Condition Code": "IN",\n "Tag Date": │ │
│ │ "September 2024",\n "Price": "16000"\n },\n ...\n ]\n}\n The LLM will │ │
│ │ return the information in JSON format, which this program will use export the data in a │ │
│ │ CSV file with the following columns: Vendor Reference,Part Number,Alternate Part │ │
│ │ Number,Keyword,Part Description,Serialized,UOM,Condition Code,Quantity Requested,Type │ │
│ │ Requested,Quoted Date,PN Quoted,Condition Quoted,Qty Quoted,Type Quoted,Unit │ │
│ │ Price,Delivery,Validity,Serial Number,Tagged By,Tag Date,Trace,Internal │ │
│ │ Comments,External Comments\nThe Vendor Reference column will be populated with the │ │
│ │ domain name of the sender's email if it is in the email as well as the date of the │ │
│ │ email if available, or current date if not. \nSerialized column will have text 'Y' in │ │
│ │ every field. \nQuantity Requested and Quantity Quoted columns will both be 1 unless the │ │
│ │ quantity was specified in the email/text submitted, in which case use the extracted │ │
│ │ number.\nPart Description and Keyword will be extracted from the email, and will be the │ │
│ │ same. This value will typically be listed immediately after the Part Number.\nPart │ │
│ │ Number will be extracted from the supplied text, and will typically be between 4 and 16 │ │
│ │ characters long containing alphanumeric characters as well as possible the characters - │ │
│ │ / \ or .\nCondition Code and Condition Quote will typically consist of 2 or 3 │ │
│ │ characters, most frequently AR, SV, OH, NE, NS, or REP. These fields will be the │ │
│ │ same.\nType Requested and Type Quoted column will always be "Outright".\nTagged By │ │
│ │ column will be the be some text that represents a company and a date (one or the other │ │
│ │ or both). It will typically be near or at the end of the each extracted entry. \nUOM │ │
│ │ will always contain the text "EA"\nSerial Number will be blank unless the LLM is able │ │
│ │ to identify what looks like a serial number in the quote text.\nQuoted Date will be the │ │
│ │ date of the email or the current date that the text is processed if not available in │ │
│ │ the email.\nTag Date, Trace, Internal Comments, and External Comments can remain │ │
│ │ blank.\nThe resulting CSV file should be named "RFQ_[YYYY-MM-DD].csv" where YYYY-MM-DD │ │
│ │ is the date, but should open a prompt to save the file using a standard OS file save │ │
│ │ dialogue box.\nI've included 3 example files, RFQ_template.csv, WYATT_1.msg, and │ │
│ │ WYATT_2.msg to start training the LLM.\nPlease implement some logic to extract the text │ │
│ │ from the .msg file using an appropriate library to read the Microsoft Outlook .eml │ │
│ │ file.\n', image_urls=None) │ │
│ │ prompt_file = 'prompt' │ │
│ │ self_heal_mode = False │ │
│ │ temperature = 0.1 │ │
│ │ use_cache = False │ │
│ │ use_custom_preprompts = False │ │
│ │ verbose = False │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ C:\Users\jpink\OneDrive - MidAmerican Aerospace, │
│ Ltd\Dev\gpt-engineer.venv\Lib\site-packages\gpt_engineer\applications\cli\collect.py:177 in │
│ collect_and_send_human_review │
│ │
│ 174 │ │
│ 175 │ review = human_review_input() │
│ 176 │ if review: │
│ ❱ 177 │ │ collect_learnings(prompt, model, temperature, config, memory, review) │
│ 178 │
│ │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ config = ('gen_code', 'execute_entrypoint') │ │
│ │ memory = <gpt_engineer.core.default.disk_memory.DiskMemory object at 0x000001CD4E5778C0> │ │
│ │ model = 'gpt-4o' │ │
│ │ prompt = Prompt(text='I want to create a small app with a simple, but attractive GUI that will take the │ │
│ │ body of an email via copy/paste or by uploading an outlook .msg file or .eml file and analyze it │ │
│ │ using an LLM API such as OpenAI or another that can use the openai library, such as grok. By │ │
│ │ default, it should use the gpt-4o-mini model. The emails will contain multiple items. Each item │ │
│ │ will have information such as a part number, a condition code (AR, SV, OH, NE, NS), as well │ │
│ │ possibly a description, quantity, price, and other some other various fields.\nThe app should have │ │
│ │ a separate config tab where it can save various other information in a secure way. By default when │ │
│ │ the program loads it will load settings from that file, such as the API key and model name that we │ │
│ │ want to use.\nPlease ensure that a library is included that can read the outlook .msg file.\nIn │ │
│ │ order to potentially use other APIs, please include code similar to the following:\nclient = │ │
│ │ openai.OpenAI(\n api_key="your_xai_api_key_here", # Replace with your xAI API key\n │ │
│ │ base_url="https://api.x.ai/v1" # xAI's API endpoint\n)\nThe API key and the base_url should │ │
│ │ be stored in a separate .env or config file.\nWhen the user submits the file or the email body, it │ │
│ │ will submit the text using the openai library to the selected LLM API. The prompt will be the │ │
│ │ following: Extract the following information from the email text and return it in a structured │ │
│ │ JSON format. The email contains quotes for aerospace parts. For each item, extract:\n1. PN (Part │ │
│ │ Number)\n2. Description\n3. Tag Information\n4. Condition Code\n5. Tag Date\n6. Price\n\nHere is │ │
│ │ the email text:\n\n\nReturn the data in the following JSON format:\n{\n │ │
│ │ "items": [\n {\n "PN": "9008000-10001",\n "Description": "Data Link Transponder │ │
│ │ (757)",\n "Tag Information": "American Airlines - ACSS Repair & Overhaul",\n "Condition │ │
│ │ Code": "IN",\n "Tag Date": "September 2024",\n "Price": "16000"\n },\n ...\n │ │
│ │ ]\n}\n The LLM will return the information in JSON format, which this program will use export the │ │
│ │ data in a CSV file with the following columns: Vendor Reference,Part Number,Alternate Part │ │
│ │ Number,Keyword,Part Description,Serialized,UOM,Condition Code,Quantity Requested,Type │ │
│ │ Requested,Quoted Date,PN Quoted,Condition Quoted,Qty Quoted,Type Quoted,Unit │ │
│ │ Price,Delivery,Validity,Serial Number,Tagged By,Tag Date,Trace,Internal Comments,External │ │
│ │ Comments\nThe Vendor Reference column will be populated with the domain name of the sender's │ │
│ │ email if it is in the email as well as the date of the email if available, or current date if not. │ │
│ │ \nSerialized column will have text 'Y' in every field. \nQuantity Requested and Quantity Quoted │ │
│ │ columns will both be 1 unless the quantity was specified in the email/text submitted, in which │ │
│ │ case use the extracted number.\nPart Description and Keyword will be extracted from the email, and │ │
│ │ will be the same. This value will typically be listed immediately after the Part Number.\nPart │ │
│ │ Number will be extracted from the supplied text, and will typically be between 4 and 16 characters │ │
│ │ long containing alphanumeric characters as well as possible the characters - / \ or .\nCondition │ │
│ │ Code and Condition Quote will typically consist of 2 or 3 characters, most frequently AR, SV, OH, │ │
│ │ NE, NS, or REP. These fields will be the same.\nType Requested and Type Quoted column will always │ │
│ │ be "Outright".\nTagged By column will be the be some text that represents a company and a date │ │
│ │ (one or the other or both). It will typically be near or at the end of the each extracted entry. │ │
│ │ \nUOM will always contain the text "EA"\nSerial Number will be blank unless the LLM is able to │ │
│ │ identify what looks like a serial number in the quote text.\nQuoted Date will be the date of the │ │
│ │ email or the current date that the text is processed if not available in the email.\nTag Date, │ │
│ │ Trace, Internal Comments, and External Comments can remain blank.\nThe resulting CSV file should │ │
│ │ be named "RFQ_[YYYY-MM-DD].csv" where YYYY-MM-DD is the date, but should open a prompt to save the │ │
│ │ file using a standard OS file save dialogue box.\nI've included 3 example files, │ │
│ │ RFQ_template.csv, WYATT_1.msg, and WYATT_2.msg to start training the LLM.\nPlease implement some │ │
│ │ logic to extract the text from the .msg file using an appropriate library to read the Microsoft │ │
│ │ Outlook .eml file.\n', image_urls=None) │ │
│ │ review = Review(ran=None, perfect=None, works=None, comments='', raw='u, , ') │ │
│ │ temperature = 0.1 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ C:\Users\jpink\OneDrive - MidAmerican Aerospace, │
│ Ltd\Dev\gpt-engineer.venv\Lib\site-packages\gpt_engineer\applications\cli\collect.py:98 in collect_learnings │
│ │
│ 95 │ """ │
│ 96 │ learnings = extract_learning(prompt, model, temperature, config, memory, review) │
│ 97 │ try: │
│ ❱ 98 │ │ send_learning(learnings) │
│ 99 │ except RuntimeError: │
│ 100 │ │ # try to remove some parts of learning that might be too big │
│ 101 │ │ # rudderstack max event size is 32kb │
│ │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ config = ('gen_code', 'execute_entrypoint') │ │
│ │ learnings = Learning( │ │
│ │ │ prompt='{"text": "I want to create a small app with a simple, but attractive GUI that │ │
│ │ wi'+4398, │ │
│ │ │ model='gpt-4o', │ │
│ │ │ temperature=0.1, │ │
│ │ │ config='["gen_code", "execute_entrypoint"]', │ │
│ │ │ logs='{"logs\\all_output.txt": │ │
│ │ "\n2025-02-24T14:05:04.922246\n========================'+27072, │ │
│ │ │ session='1653593137', │ │
│ │ │ review=Review(ran=None, perfect=None, works=None, comments='', raw='u, , '), │ │
│ │ │ timestamp='2025-02-24T20:05:13.772186', │ │
│ │ │ version='0.3' │ │
│ │ ) │ │
│ │ memory = <gpt_engineer.core.default.disk_memory.DiskMemory object at 0x000001CD4E5778C0> │ │
│ │ model = 'gpt-4o' │ │
│ │ prompt = Prompt(text='I want to create a small app with a simple, but attractive GUI that will take the │ │
│ │ body of an email via copy/paste or by uploading an outlook .msg file or .eml file and analyze it │ │
│ │ using an LLM API such as OpenAI or another that can use the openai library, such as grok. By │ │
│ │ default, it should use the gpt-4o-mini model. The emails will contain multiple items. Each item │ │
│ │ will have information such as a part number, a condition code (AR, SV, OH, NE, NS), as well │ │
│ │ possibly a description, quantity, price, and other some other various fields.\nThe app should have │ │
│ │ a separate config tab where it can save various other information in a secure way. By default when │ │
│ │ the program loads it will load settings from that file, such as the API key and model name that we │ │
│ │ want to use.\nPlease ensure that a library is included that can read the outlook .msg file.\nIn │ │
│ │ order to potentially use other APIs, please include code similar to the following:\nclient = │ │
│ │ openai.OpenAI(\n api_key="your_xai_api_key_here", # Replace with your xAI API key\n │ │
│ │ base_url="https://api.x.ai/v1" # xAI's API endpoint\n)\nThe API key and the base_url should │ │
│ │ be stored in a separate .env or config file.\nWhen the user submits the file or the email body, it │ │
│ │ will submit the text using the openai library to the selected LLM API. The prompt will be the │ │
│ │ following: Extract the following information from the email text and return it in a structured │ │
│ │ JSON format. The email contains quotes for aerospace parts. For each item, extract:\n1. PN (Part │ │
│ │ Number)\n2. Description\n3. Tag Information\n4. Condition Code\n5. Tag Date\n6. Price\n\nHere is │ │
│ │ the email text:\n\n\nReturn the data in the following JSON format:\n{\n │ │
│ │ "items": [\n {\n "PN": "9008000-10001",\n "Description": "Data Link Transponder │ │
│ │ (757)",\n "Tag Information": "American Airlines - ACSS Repair & Overhaul",\n "Condition │ │
│ │ Code": "IN",\n "Tag Date": "September 2024",\n "Price": "16000"\n },\n ...\n │ │
│ │ ]\n}\n The LLM will return the information in JSON format, which this program will use export the │ │
│ │ data in a CSV file with the following columns: Vendor Reference,Part Number,Alternate Part │ │
│ │ Number,Keyword,Part Description,Serialized,UOM,Condition Code,Quantity Requested,Type │ │
│ │ Requested,Quoted Date,PN Quoted,Condition Quoted,Qty Quoted,Type Quoted,Unit │ │
│ │ Price,Delivery,Validity,Serial Number,Tagged By,Tag Date,Trace,Internal Comments,External │ │
│ │ Comments\nThe Vendor Reference column will be populated with the domain name of the sender's │ │
│ │ email if it is in the email as well as the date of the email if available, or current date if not. │ │
│ │ \nSerialized column will have text 'Y' in every field. \nQuantity Requested and Quantity Quoted │ │
│ │ columns will both be 1 unless the quantity was specified in the email/text submitted, in which │ │
│ │ case use the extracted number.\nPart Description and Keyword will be extracted from the email, and │ │
│ │ will be the same. This value will typically be listed immediately after the Part Number.\nPart │ │
│ │ Number will be extracted from the supplied text, and will typically be between 4 and 16 characters │ │
│ │ long containing alphanumeric characters as well as possible the characters - / \ or .\nCondition │ │
│ │ Code and Condition Quote will typically consist of 2 or 3 characters, most frequently AR, SV, OH, │ │
│ │ NE, NS, or REP. These fields will be the same.\nType Requested and Type Quoted column will always │ │
│ │ be "Outright".\nTagged By column will be the be some text that represents a company and a date │ │
│ │ (one or the other or both). It will typically be near or at the end of the each extracted entry. │ │
│ │ \nUOM will always contain the text "EA"\nSerial Number will be blank unless the LLM is able to │ │
│ │ identify what looks like a serial number in the quote text.\nQuoted Date will be the date of the │ │
│ │ email or the current date that the text is processed if not available in the email.\nTag Date, │ │
│ │ Trace, Internal Comments, and External Comments can remain blank.\nThe resulting CSV file should │ │
│ │ be named "RFQ_[YYYY-MM-DD].csv" where YYYY-MM-DD is the date, but should open a prompt to save the │ │
│ │ file using a standard OS file save dialogue box.\nI've included 3 example files, │ │
│ │ RFQ_template.csv, WYATT_1.msg, and WYATT_2.msg to start training the LLM.\nPlease implement some │ │
│ │ logic to extract the text from the .msg file using an appropriate library to read the Microsoft │ │
│ │ Outlook .eml file.\n', image_urls=None) │ │
│ │ review = Review(ran=None, perfect=None, works=None, comments='', raw='u, , ') │ │
│ │ temperature = 0.1 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ C:\Users\jpink\OneDrive - MidAmerican Aerospace, │
│ Ltd\Dev\gpt-engineer.venv\Lib\site-packages\gpt_engineer\applications\cli\collect.py:53 in send_learning │
│ │
│ 50 │ improving gpt-engineer, and letting it handle more use cases. │
│ 51 │ Consent logic is in gpt_engineer/learning.py. │
│ 52 │ """ │
│ ❱ 53 │ import rudderstack.analytics as rudder_analytics │
│ 54 │ │
│ 55 │ rudder_analytics.write_key = "2Re4kqwL61GDp7S8ewe6K5dbogG" │
│ 56 │ rudder_analytics.dataPlaneUrl = "https://gptengineerezm.dataplane.rudderstack.com" │
│ │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ learning = Learning( │ │
│ │ │ prompt='{"text": "I want to create a small app with a simple, but attractive GUI that wi'+4398, │ │
│ │ │ model='gpt-4o', │ │
│ │ │ temperature=0.1, │ │
│ │ │ config='["gen_code", "execute_entrypoint"]', │ │
│ │ │ logs='{"logs\\all_output.txt": │ │
│ │ "\n2025-02-24T14:05:04.922246\n========================'+27072, │ │
│ │ │ session='1653593137', │ │
│ │ │ review=Review(ran=None, perfect=None, works=None, comments='', raw='u, , '), │ │
│ │ │ timestamp='2025-02-24T20:05:13.772186', │ │
│ │ │ version='0.3' │ │
│ │ ) │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'rudderstack'
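Not part of the original report, but a possible mitigation: the crash happens because `send_learning` imports the telemetry package unconditionally, so a missing optional dependency aborts the whole run after code generation succeeded. A minimal sketch of a defensive import (the `optional_import` helper and its usage here are hypothetical illustrations, not gpt-engineer's actual code):

```python
import importlib


def optional_import(module_name):
    """Return the named module if it is installed, else None.

    Guarding an optional telemetry dependency this way lets the main
    code-generation flow finish even when the analytics package
    (here, 'rudderstack') is not installed.
    """
    try:
        return importlib.import_module(module_name)
    except ModuleNotFoundError:
        return None


# Hypothetical usage inside send_learning():
rudder_analytics = optional_import("rudderstack.analytics")
if rudder_analytics is None:
    # Telemetry package missing: skip sending learnings instead of crashing.
    pass
```

Until something like this lands upstream, installing the missing analytics dependency into the same virtual environment should also unblock the run (check gpt-engineer's optional/extra requirements for the exact package name before installing).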
System Information
\Dev\gpt-engineer> uv run gpte --sysinfo
Usage: gpte [OPTIONS] [PROJECT_PATH]
Try 'gpte -h' for help.
╭─ Error ──────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ No such option: --sysinfo │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Installation Method
I installed via pip install.