Commit a170425

Merge pull request #1658 from endolith/classic/API_issues
Fix a bunch more typos in model documentation files
2 parents: 98e8032 + 779f6c5

13 files changed (+15 lines, -30 lines)

docs/guides/profiles.mdx

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ interpreter.loop = True
 
 ```YAML
 llm:
-  model: "gpt-4-o"
+  model: "gpt-4o"
   temperature: 0
   # api_key: ...  # Your API key, if the API requires it
   # api_base: ...  # The URL where an OpenAI-compatible server is running to handle LLM API requests
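The corrected profile above maps onto Open Interpreter's Python settings; a minimal sketch of the equivalent programmatic configuration (the `temperature`, `api_key`, and `api_base` attribute names are assumed to mirror the YAML keys and are not taken from this diff):

```python
from interpreter import interpreter

# Assumed Python equivalents of the YAML profile keys shown above
interpreter.llm.model = "gpt-4o"    # corrected model name from this change
interpreter.llm.temperature = 0
# interpreter.llm.api_key = "..."   # your API key, if the API requires it
# interpreter.llm.api_base = "..."  # URL of an OpenAI-compatible server, if used

interpreter.chat()
```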

docs/language-models/hosted-models/anyscale.mdx

Lines changed: 1 addition & 2 deletions
@@ -13,7 +13,7 @@ interpreter --model anyscale/<model-name>
 ```python Python
 from interpreter import interpreter
 
-# Set the model to use from AWS Bedrock:
+# Set the model to use from Anyscale:
 interpreter.llm.model = "anyscale/<model-name>"
 interpreter.chat()
 ```
@@ -46,7 +46,6 @@ interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
 interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
 interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
 interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"
-
 ```
 
 </CodeGroup>
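Putting the corrected comment and the model list together, an end-to-end Anyscale run might look like the sketch below; the `ANYSCALE_API_KEY` variable name follows the usual LiteLLM convention and is an assumption, since this diff does not show Anyscale's environment-variable table:

```python
import os
from interpreter import interpreter

# Assumed environment variable name (LiteLLM convention); not shown in this diff
os.environ["ANYSCALE_API_KEY"] = "your-api-key-here"

# Set the model to use from Anyscale (any entry from the list above)
interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
interpreter.chat()
```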

docs/language-models/hosted-models/aws-sagemaker.mdx

Lines changed: 2 additions & 3 deletions
@@ -37,14 +37,13 @@ We support the following completion models from AWS Sagemaker:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
 interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f
-interpreter --model sagemaker/<your-hugginface-deployment-name>
+interpreter --model sagemaker/<your-huggingface-deployment-name>
 ```
 
 ```python Python
@@ -54,7 +53,7 @@ interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f"
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b"
 interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f"
-interpreter.llm.model = "sagemaker/<your-hugginface-deployment-name>"
+interpreter.llm.model = "sagemaker/<your-huggingface-deployment-name>"
 ```
 
 </CodeGroup>
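SageMaker models are addressed by their JumpStart or Hugging Face deployment name, as above; credentials typically come from standard AWS environment variables rather than a provider-specific key. A hedged sketch (the variable names below are common AWS/LiteLLM conventions, not part of this diff):

```python
import os
from interpreter import interpreter

# Standard AWS credential variables; AWS_REGION_NAME is assumed (LiteLLM convention),
# boto3 itself usually reads AWS_DEFAULT_REGION
os.environ["AWS_ACCESS_KEY_ID"] = "..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."
os.environ["AWS_REGION_NAME"] = "us-west-2"

# Use either a JumpStart model or your own Hugging Face deployment name
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b"
interpreter.chat()
```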

docs/language-models/hosted-models/baseten.mdx

Lines changed: 1 addition & 6 deletions
@@ -30,20 +30,15 @@ We support the following completion models from Baseten:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model baseten/qvv0xeq
 interpreter --model baseten/q841o8w
 interpreter --model baseten/31dxrj3
-
-
 ```
 
 ```python Python
 interpreter.llm.model = "baseten/qvv0xeq"
 interpreter.llm.model = "baseten/q841o8w"
 interpreter.llm.model = "baseten/31dxrj3"
-
-
 ```
 
 </CodeGroup>
@@ -54,4 +49,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description     | Where to Find                                                                                              |
 | -------------------- | --------------- | ---------------------------------------------------------------------------------------------------------- |
-| BASETEN_API_KEY'`    | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys)   |
+| `BASETEN_API_KEY`    | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys)   |

docs/language-models/hosted-models/cloudflare.mdx

Lines changed: 1 addition & 4 deletions
@@ -31,20 +31,17 @@ We support the following completion models from Cloudflare Workers AI:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-fp16
 interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-int8
 interpreter --model @cf/mistral/mistral-7b-instruct-v0.1
 interpreter --model @hf/thebloke/codellama-7b-instruct-awq
-
 ```
 
 ```python Python
 interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
 interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-int8"
 interpreter.llm.model = "@cf/mistral/mistral-7b-instruct-v0.1"
 interpreter.llm.model = "@hf/thebloke/codellama-7b-instruct-awq"
-
 ```
 
 </CodeGroup>
@@ -55,5 +52,5 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable    | Description                | Where to Find                                                                                    |
 | ----------------------- | -------------------------- | ------------------------------------------------------------------------------------------------ |
-| `CLOUDFLARE_API_KEY'`   | Cloudflare API key         | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens)           |
+| `CLOUDFLARE_API_KEY`    | Cloudflare API key         | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens)           |
 | `CLOUDFLARE_ACCOUNT_ID` | Your Cloudflare account ID | [Cloudflare Dashboard -> Grab the Account ID from the url like: https://dash.cloudflare.com/{CLOUDFLARE_ACCOUNT_ID}?account= ](https://dash.cloudflare.com/) |
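Both Cloudflare variables from the corrected table can also be set in-process before picking one of the models listed above; a minimal sketch:

```python
import os
from interpreter import interpreter

# Values come from the Cloudflare dashboard locations listed in the table above
os.environ["CLOUDFLARE_API_KEY"] = "your-api-token-here"
os.environ["CLOUDFLARE_ACCOUNT_ID"] = "your-account-id-here"

interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
interpreter.chat()
```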

docs/language-models/hosted-models/deepinfra.mdx

Lines changed: 1 addition & 4 deletions
@@ -33,14 +33,12 @@ We support the following completion models from DeepInfra:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model deepinfra/meta-llama/Llama-2-70b-chat-hf
 interpreter --model deepinfra/meta-llama/Llama-2-7b-chat-hf
 interpreter --model deepinfra/meta-llama/Llama-2-13b-chat-hf
 interpreter --model deepinfra/codellama/CodeLlama-34b-Instruct-hf
 interpreter --model deepinfra/mistral/mistral-7b-instruct-v0.1
 interpreter --model deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1
-
 ```
 
 ```python Python
@@ -50,7 +48,6 @@ interpreter.llm.model = "deepinfra/meta-llama/Llama-2-13b-chat-hf"
 interpreter.llm.model = "deepinfra/codellama/CodeLlama-34b-Instruct-hf"
 interpreter.llm.model = "deepinfra/mistral-7b-instruct-v0.1"
 interpreter.llm.model = "deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1"
-
 ```
 
 </CodeGroup>
@@ -61,4 +58,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description       | Where to Find                                                           |
 | -------------------- | ----------------- | ----------------------------------------------------------------------- |
-| `DEEPINFRA_API_KEY'` | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys)  |
+| `DEEPINFRA_API_KEY`  | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys)  |

docs/language-models/hosted-models/gpt-4-setup.mdx

Lines changed: 2 additions & 2 deletions
@@ -27,7 +27,7 @@ or
 3. **Add Environment Variable**: In the editor, add the line below, replacing `your-api-key-here` with your actual API key:
 
 ```
-export OPENAI\_API\_KEY='your-api-key-here'
+export OPENAI_API_KEY='your-api-key-here'
 ```
 
 4. **Save and Exit**: Press Ctrl+O to write the changes, followed by Ctrl+X to close the editor.
@@ -40,7 +40,7 @@ or
 2. **Set environment variable in the current session**: To set the environment variable in the current session, use the command below, replacing `your-api-key-here` with your actual API key:
 
 ```
-setx OPENAI\_API\_KEY "your-api-key-here"
+setx OPENAI_API_KEY "your-api-key-here"
 ```
 
 This command will set the OPENAI_API_KEY environment variable for the current session.
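For a quick test without touching shell profiles, the same variable can also be set from Python for the current process only; a minimal sketch (using `os.environ` is an alternative to the `export`/`setx` commands above, not something this page prescribes):

```python
import os
from interpreter import interpreter

# Applies only to this Python process; persistent setup still uses export/setx as above
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

interpreter.chat()
```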

docs/language-models/hosted-models/mistral-api.mdx

Lines changed: 0 additions & 1 deletion
@@ -30,7 +30,6 @@ We support the following completion models from the Mistral API:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model mistral/mistral-tiny
 interpreter --model mistral/mistral-small
 interpreter --model mistral/mistral-medium

docs/language-models/hosted-models/nlp-cloud.mdx

Lines changed: 1 addition & 1 deletion
@@ -25,4 +25,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable | Description       | Where to Find                                                      |
 | -------------------- | ----------------- | ------------------------------------------------------------------ |
-| `NLP_CLOUD_API_KEY'` | NLP Cloud API key | [NLP Cloud Dashboard -> API KEY](https://nlpcloud.com/home/token)  |
+| `NLP_CLOUD_API_KEY`  | NLP Cloud API key | [NLP Cloud Dashboard -> API KEY](https://nlpcloud.com/home/token)  |

docs/language-models/hosted-models/perplexity.mdx

Lines changed: 1 addition & 2 deletions
@@ -39,7 +39,6 @@ We support the following completion models from the Perplexity API:
 <CodeGroup>
 
 ```bash Terminal
-
 interpreter --model perplexity/pplx-7b-chat
 interpreter --model perplexity/pplx-70b-chat
 interpreter --model perplexity/pplx-7b-online
@@ -77,4 +76,4 @@ Set the following environment variables [(click here to learn how)](https://chat
 
 | Environment Variable    | Description                          | Where to Find                                                      |
 | ----------------------- | ------------------------------------ | ------------------------------------------------------------------ |
-| `PERPLEXITYAI_API_KEY'` | The Perplexity API key from pplx-api | [Perplexity API Settings](https://www.perplexity.ai/settings/api)  |
+| `PERPLEXITYAI_API_KEY`  | The Perplexity API key from pplx-api | [Perplexity API Settings](https://www.perplexity.ai/settings/api)  |

0 commit comments

Comments
 (0)