Selecting the right AI Model
Magicdoor enables you to use a selection of different AI models in one interface. In contrast to other multi-model platforms, Magicdoor is simple and priced based on your actual usage, with a much lower subscription fee.
Model Selection
By default, Magicdoor will use GPT-4o from OpenAI. This is the model you are probably most familiar with from ChatGPT. It supports text and image inputs, and is suitable for most general use cases. But the beauty of Magicdoor is that you can easily switch models with the selector under the input area.
Available models:
- GPT-4o: Image support ✓ | Web search ✓ | Image generation ✓ | Good at generating text and chatting
- Claude 3.5 Sonnet: Image support ✓ | Web search ✓ | Image generation ✓ | Particularly good at code and analysis
- GPT-4o Mini: Image support ✓ | Web search ✓ | Image generation ✓ | Super cheap, and solid at summarizing text
- Perplexity (Best): Image support ✗ | Web search ✓ | Image generation ✗ | Will search the web and provide sources
- Perplexity (Fast): Image support ✗ | Web search ✗ | Faster and cheaper, but less powerful
- GPT-o1-preview: Image support ✓ | Web search ✗ | OpenAI's state-of-the-art reasoning model
- GPT-o1-mini: Image support ✓ | Web search ✗ | A smaller and faster version of the reasoning model
The magic of switching models within a conversation
One of Perplexity's weaknesses is that it's so optimized for search that it's not always the best at generating text. GPT-4o and Claude, on the other hand, are the best at generating text, but not always the best at search.
With Magicdoor, you can use different models within one conversation depending on the task. For example, you can ask Perplexity to gather some information from the web, then use Claude or GPT-4o to talk about it, and then switch to a reasoning model to formulate an action plan.
Here are some simple recipes for how to combine different models:
- Search & Chat: Use Perplexity to find information, then use Claude or GPT-4o to chat about it.
- Chat with fact-checks: Use GPT-4o or Claude to chat, and Perplexity to fact-check with up-to-date information.
- Summarizing: Use GPT-4o Mini to summarize text.
- Planning: Use GPT-o1-preview to think through a problem, then follow up with GPT-4o or Claude to optimize and execute.
How to switch model
To switch model, simply click on the model name in the dropdown menu under the input area. When you start a new chat or load an existing chat from the sidebar, the model selector will default back to GPT-4o.
Managing cost
The two most important factors in chat cost are tokens and which model you use. All models price their usage based on tokens. A token is simply the smallest unit of data that is processed by a model; it is not necessarily a word, and can be part of a word. The price of one message depends on the length of the prompt (prompt tokens) and the length of the reply (completion tokens).
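To make that concrete, here is a minimal sketch of the calculation. The prices are invented placeholders, not Magicdoor's actual rates (see the pricing article linked below for those):

```python
# A minimal sketch of token-based pricing. The per-million-token prices
# used here are invented placeholders, not real rates.

def message_cost(prompt_tokens: int, completion_tokens: int,
                 prompt_price_per_1m: float, completion_price_per_1m: float) -> float:
    """One message costs: prompt tokens at the input rate plus reply tokens at the output rate."""
    return (prompt_tokens * prompt_price_per_1m
            + completion_tokens * completion_price_per_1m) / 1_000_000

# Example: a 500-token prompt and a 300-token reply on a hypothetical model
# priced at $1 per million prompt tokens and $4 per million completion tokens.
print(f"${message_cost(500, 300, 1.00, 4.00):.4f}")  # $0.0017
```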
Here is the very important part to understand: in order to have an ongoing chat conversation with an LLM, we have to send the entire conversation as the prompt. This means that the cost of a message depends very strongly on the length of the conversation, and total cost climbs quickly as the number of messages grows.
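To see how quickly this adds up, here is a tiny simulation with invented message sizes (about 100 tokens per question and 200 per reply):

```python
# Why long chats get expensive: each new message resends the whole conversation
# as the prompt. The message sizes below are invented for illustration.

history_tokens = 0        # tokens already in the conversation
total_prompt_tokens = 0   # prompt tokens billed so far

for turn in range(1, 11):
    question = 100        # assume each question is ~100 tokens
    reply = 200           # assume each reply is ~200 tokens

    prompt_tokens = history_tokens + question   # the full history rides along
    total_prompt_tokens += prompt_tokens
    history_tokens += question + reply          # the reply joins the history too

    print(f"turn {turn:2d}: prompt {prompt_tokens:5d} tokens, billed so far {total_prompt_tokens:6d}")
```

By turn 10, each new message already carries nearly 3,000 tokens of history, and roughly 14,500 prompt tokens have been billed in total, even though you only typed about 1,000 tokens of questions yourself.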
The other key driver of cost is the model you use. Each model has its own price per token, and the differences are large: to give you an idea, Claude 3.5 Sonnet is around 100x more expensive than GPT-4o Mini.
- Learn more about how tokens work with LLMs in this article
- Find the exact cost per model with some examples in this article
Tips for managing costs
- Start a new chat every time you start a new subject. This ensures you're not resending a ton of unrelated messages to the model unnecessarily.
- If a conversation is getting very long, consider asking the model to summarize the conversation so far, and use the summary as the prompt for a new chat (see the sketch after this list).
- Uploading images uses a lot of tokens, so keep that in mind if you're using a model that supports them.
- GPT-o1-preview is expensive, and can cost 10-20 cents even for short conversations. Use it wisely!
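As a rough illustration of the summarize-and-restart tip above, here is a back-of-the-envelope comparison. The four-characters-per-token figure is only a common rule of thumb, not an exact tokenizer count:

```python
# Rough illustration of the "summarize, then start a new chat" tip.
# The 4-characters-per-token estimate is a rule of thumb, not a real tokenizer.

def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

long_history = "word " * 2400   # stand-in for a long conversation (~12,000 characters)
summary = "Key points: we compared three laptops and picked the lightest one."

print(rough_token_estimate(long_history))  # ~3,000 tokens resent with every new message
print(rough_token_estimate(summary))       # ~16 tokens if you carry only the summary forward
```

Carrying a short summary into a new chat means each follow-up message costs a small fraction of what it would in the original thread.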