Anthropic launches Claude Pro for $20 per month


Claude Pro is Anthropic’s answer to ChatGPT Plus and promises more usage and early access to new features.

Anthropic, an AI startup founded by former OpenAI researchers, has introduced a paid subscription plan called Claude Pro for its Claude conversational AI service. The new plan offers higher usage limits, priority access during high-traffic periods, and early access to new features for $20 or £18 per month.

Higher usage limits and new features

According to Anthropic, Claude Pro offers at least 5 times more usage than the free Claude service, with the ability to send many more messages over an 8-hour period.

Exact limits vary with factors such as message length and attachments: a short conversation of around 200 English sentences might allow about 100 messages per 8 hours, while uploading a copy of The Great Gatsby drops that to roughly 20 messages because the conversation becomes so large. Anthropic attributes these limits to the heavy computation the service requires.
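As a rough illustration of that trade-off, consider a fixed processing budget per 8-hour window: the bigger the conversation, the fewer messages fit into the budget. All numbers below are assumptions for the sake of the sketch, not Anthropic's published figures.

```python
# Illustrative model only: assumes a fixed per-8-hour token-processing
# budget and that each reply reprocesses the whole conversation.
BUDGET_TOKENS = 2_000_000  # assumed budget, not an Anthropic figure

def estimated_messages(conversation_tokens: int) -> int:
    """Messages available under the budget for a conversation of a
    given size (simple linear model)."""
    return BUDGET_TOKENS // conversation_tokens

short_chat = 3_000    # assumed: a chat of a few hundred sentences
novel_chat = 90_000   # assumed: a full novel uploaded into the context

print(estimated_messages(short_chat), estimated_messages(novel_chat))
```

The exact numbers do not matter; the point is that the allowance shrinks in proportion to how much context each reply has to reprocess.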



The subscription also provides priority access when traffic is high, ensuring that subscribers can continue to use Claude even when demand spikes. In addition, subscribers get early access to new features as Anthropic continues to improve Claude.

A large context window is a cost driver

To make the best use of available processing power, Anthropic recommends opening a new chat for each topic so that the chatbot does not have to process the entire chat history for each answer.

For the same reason, when working with long documents it makes sense to ask multiple questions at once rather than asking each question individually. Also, the same document should not be uploaded multiple times in the same conversation.
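The saving from batching follows directly from the fact that each reply reprocesses the full conversation. A quick sketch (document and question sizes are assumed values, purely for illustration):

```python
# Illustrative cost comparison: asking N questions about a long
# document one at a time vs. all in a single message.
DOC_TOKENS = 50_000  # assumed size of an uploaded document
Q_TOKENS = 50        # assumed size of one question

def cost_separate(num_questions: int) -> int:
    """One question per turn: every reply re-reads the document
    plus all earlier turns."""
    total, history = 0, DOC_TOKENS
    for _ in range(num_questions):
        history += Q_TOKENS
        total += history  # whole context processed again each turn
    return total

def cost_batched(num_questions: int) -> int:
    """All questions in one message: the document is processed once."""
    return DOC_TOKENS + num_questions * Q_TOKENS

print(cost_separate(5), cost_batched(5))
```

With five questions, the one-at-a-time approach processes the document five times over, while the batched version processes it once.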

From these recommendations, it is clear that the unique selling point of Claude 2, its large context window of 100K tokens, is also a cost driver. The advantage of a large context window is that the chatbot can access more content at once when generating text, potentially resulting in higher-quality responses.

The largest GPT-4 model, which is currently only available to select individuals, offers a context window of up to 32K tokens. The fact that it is not more widely deployed by OpenAI is probably due to its cost and high computational requirements.

