import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
Claudette’s source
This is the ‘literate’ source code for Claudette. You can view the fully rendered version of the notebook here, or you can clone the git repo and run the interactive notebook in Jupyter. The notebook is converted to the Python module claudette/core.py using nbdev. The goal of this source code is both to create the Python module, and to teach the reader how it is created, without assuming much existing knowledge about Claude’s API.
Most of the time you’ll see that we write some source code first, and then a description or discussion of it afterwards.
Setup
To print every HTTP request and response in full, uncomment the above line. This functionality is provided by Anthropic’s SDK.
If you’re reading the rendered version of this notebook, you’ll see an “Exported source” collapsible widget below. If you’re reading the source notebook directly, you’ll see #| exports
at the top of the cell. These show that this piece of code will be exported into the python module that this notebook creates. No other code will be included – any other code in this notebook is just for demonstration, documentation, and testing.
You can toggle expanding/collapsing the source code of all exported sections by using the </> Code
menu in the top right of the rendered notebook page.
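For example, an exported cell in the source notebook looks something like this (a schematic sketch rather than an actual cell from this notebook):

#| exports
# The directive above tells nbdev to write this cell to claudette/core.py;
# cells without it stay in the notebook only.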
Exported source
model_types = { # Anthropic
    'claude-3-opus-20240229': 'opus',
    'claude-3-7-sonnet-20250219': 'sonnet',
    'claude-3-5-sonnet-20241022': 'sonnet-3-5',
    'claude-3-haiku-20240307': 'haiku-3',
    'claude-3-5-haiku-20241022': 'haiku-3-5',
    # AWS
    'anthropic.claude-3-opus-20240229-v1:0': 'opus',
    'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet',
    'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet',
    'anthropic.claude-3-haiku-20240307-v1:0': 'haiku',
    # Google
    'claude-3-opus@20240229': 'opus',
    'claude-3-5-sonnet-v2@20241022': 'sonnet',
    'claude-3-sonnet@20240229': 'sonnet',
    'claude-3-haiku@20240307': 'haiku',
}

all_models = list(model_types)
Warning: between Anthropic SDK 0.4.2 and 0.4.7 the interface to the Model
type changed.
models
['claude-3-opus-20240229',
'claude-3-7-sonnet-20250219',
'claude-3-5-sonnet-20241022',
'claude-3-haiku-20240307',
'claude-3-5-haiku-20241022']
Exported source
text_only_models = ('claude-3-5-haiku-20241022',)
Exported source
has_streaming_models = set(all_models)
has_system_prompt_models = set(all_models)
has_temperature_models = set(all_models)
has_extended_thinking_models = {'claude-3-7-sonnet-20250219'}
has_streaming_models
{'anthropic.claude-3-5-sonnet-20241022-v2:0',
'anthropic.claude-3-haiku-20240307-v1:0',
'anthropic.claude-3-opus-20240229-v1:0',
'anthropic.claude-3-sonnet-20240229-v1:0',
'claude-3-5-haiku-20241022',
'claude-3-5-sonnet-20241022',
'claude-3-5-sonnet-v2@20241022',
'claude-3-7-sonnet-20250219',
'claude-3-haiku-20240307',
'claude-3-haiku@20240307',
'claude-3-opus-20240229',
'claude-3-opus@20240229',
'claude-3-sonnet@20240229'}
can_use_extended_thinking
can_use_extended_thinking (m)
Exported source
def can_stream(m): return m in has_streaming_models
def can_set_system_prompt(m): return m in has_system_prompt_models
def can_set_temperature(m): return m in has_temperature_models
def can_use_extended_thinking(m): return m in has_extended_thinking_models
can_set_temperature
can_set_temperature (m)
can_set_system_prompt
can_set_system_prompt (m)
can_stream
can_stream (m)
We include these functions to provide a uniform library interface with cosette, since OpenAI models such as o1 do not have many of these capabilities.
assert can_stream('claude-3-5-sonnet-20241022') and can_set_system_prompt('claude-3-5-sonnet-20241022') and can_set_temperature('claude-3-5-sonnet-20241022')
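We can also check the negative cases; these assertions follow directly from the sets defined above:

# Only claude-3-7-sonnet is in has_extended_thinking_models
assert can_use_extended_thinking('claude-3-7-sonnet-20250219')
assert not can_use_extended_thinking('claude-3-5-haiku-20241022')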
These are the current versions and prices of Anthropic’s models at the time of writing.
model = models[1]; model
'claude-3-7-sonnet-20250219'
For examples, we’ll use Sonnet 3.7, since it’s awesome.
Anthropic SDK
cli = Anthropic()
This is what Anthropic’s SDK provides for interacting with Claude from Python. To use it, pass it a list of messages, each with content and a role. The roles should alternate between user and assistant.
After the code below you’ll see an indented section with an orange vertical line on the left. This is used to show the result of running the code above. Because the code is running in a Jupyter Notebook, we don’t have to use print
to display results, we can just type the expression directly, as we do with r
here.
m = {'role': 'user', 'content': "I'm Jeremy"}
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
r
Hello Jeremy! It’s nice to meet you. How are you doing today? Is there anything I can help you with or something you’d like to talk about?
- id:
msg_01YTFnV5W6dv72G56yVgETgc
- content:
[{'citations': None, 'text': "Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or something you'd like to talk about?", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 36, 'server_tool_use': None}
Formatting output
That output is pretty long and hard to read, so let’s clean it up. We’ll start by pulling out the Content part of the message. To do that, we’re going to write our first function, which will be included in the claudette/core.py module.
This is the first exported public function or class we’re creating (the previous export was of a variable). In the rendered version of the notebook for these you’ll see 4 things, in this order (unless the symbol starts with a single _
, which indicates it’s private):
- The signature (with the symbol name as a heading, with a horizontal rule above)
- A table of parameter docs (if provided)
- The doc string (in italics).
- The source code (in a collapsible “Exported source” block)
After that, we generally provide a bit more detail on what we’ve created, and why, along with a sample usage.
find_block
find_block (r:collections.abc.Mapping, blk_type:type=<class 'anthropic.types.text_block.TextBlock'>)
Find the first block of type blk_type
in r.content
.
| | Type | Default | Details |
|---|---|---|---|
| r | Mapping | | The message to look in |
| blk_type | type | TextBlock | The type of block to find |
Exported source
def find_block(r:abc.Mapping, # The message to look in
               blk_type:type=TextBlock # The type of block to find
              ):
    "Find the first block of type `blk_type` in `r.content`."
    return first(o for o in r.content if isinstance(o,blk_type))
This makes it easier to grab the needed parts of Claude’s responses, which can include multiple pieces of content. By default, we look for the first text block. That will generally have the content we want to display.
find_block(r)
TextBlock(citations=None, text="Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or something you'd like to talk about?", type='text')
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    return blk.text.strip() if hasattr(blk,'text') else str(blk)
For display purposes, we often just want to show the text itself.
contents(r)
"Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or something you'd like to talk about?"
Exported source
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self)) # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>"""
Jupyter looks for a _repr_markdown_
method in displayed objects; we add this in order to display just the content text, and collapse full details into a hideable section. Note that patch
is from fastcore, and is used to add (or replace) functionality in an existing class. We pass the class(es) that we want to patch as type annotations to self
. In this case, _repr_markdown_
is being added to Anthropic’s Message
class, so when we display the message now we just see the contents, and the details are hidden away in a collapsible details block.
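If you haven’t seen patch before, here’s the idea in isolation, using a throwaway class that isn’t part of claudette:

from fastcore.basics import patch

class A: pass

@patch
def greet(self:A): return 'hi'

A().greet()  # 'hi' -- the method was added to the existing class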
r
Hello Jeremy! It’s nice to meet you. How are you doing today? Is there anything I can help you with or something you’d like to talk about?
- id:
msg_01YTFnV5W6dv72G56yVgETgc
- content:
[{'citations': None, 'text': "Hello Jeremy! It's nice to meet you. How are you doing today? Is there anything I can help you with or something you'd like to talk about?", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 36, 'server_tool_use': None}
One key part of the response is the usage
key, which tells us how many tokens we used by returning a Usage
object.
We’ll add some helpers to make things a bit cleaner for creating and formatting these objects.
r.usage
In: 10; Out: 36; Cache create: 0; Cache read: 0; Total Tokens: 46; Server tool use (web search requests): 0
server_tool_usage
server_tool_usage (web_search_requests=0)
Little helper to create a server tool usage object
Exported source
def server_tool_usage(web_search_requests=0):
    'Little helper to create a server tool usage object'
    return ServerToolUsage(web_search_requests=web_search_requests)
usage
usage (inp=0, out=0, cache_create=0, cache_read=0, server_tool_use=ServerToolUsage(web_search_requests=0))
Slightly more concise version of Usage
.
| | Type | Default | Details |
|---|---|---|---|
| inp | int | 0 | input tokens |
| out | int | 0 | Output tokens |
| cache_create | int | 0 | Cache creation tokens |
| cache_read | int | 0 | Cache read tokens |
| server_tool_use | ServerToolUsage | ServerToolUsage(web_search_requests=0) | server tool use |
Exported source
def usage(inp=0, # input tokens
          out=0, # Output tokens
          cache_create=0, # Cache creation tokens
          cache_read=0, # Cache read tokens
          server_tool_use=server_tool_usage() # server tool use
         ):
    'Slightly more concise version of `Usage`.'
    return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create,
                 cache_read_input_tokens=cache_read, server_tool_use=server_tool_use)
The constructor provided by Anthropic is rather verbose, so we clean it up a bit, using a lowercase version of the name.
usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Server tool use (web search requests): 0
Usage.total
Usage.total ()
Exported source
def _dgetattr(o,s,d):
    "Like getattr, but returns the default if the result is None"
    return getattr(o,s,d) or d

@patch(as_prop=True)
def total(self:Usage): return self.input_tokens+self.output_tokens+_dgetattr(self, "cache_creation_input_tokens",0)+_dgetattr(self, "cache_read_input_tokens",0)
Adding a total
property to Usage
makes it easier to see how many tokens we’ve used up altogether.
usage(5,1).total
6
Usage.__repr__
Usage.__repr__ ()
Return repr(self).
Exported source
@patch
def __repr__(self:Usage):
    io_toks = f'In: {self.input_tokens}; Out: {self.output_tokens}'
    cache_toks = f'Cache create: {_dgetattr(self, "cache_creation_input_tokens",0)}; Cache read: {_dgetattr(self, "cache_read_input_tokens",0)}'
    server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
    server_tool_use_str = f'Server tool use (web search requests): {server_tool_use.web_search_requests}'
    total_tok = f'Total Tokens: {self.total}'
    return f'{io_toks}; {cache_toks}; {total_tok}; {server_tool_use_str}'
In Python, patching __repr__ lets us change how an object is displayed. (More generally, methods starting and ending in __ are called dunder methods, and have some magic behavior – such as, in this case, changing how an object is displayed.) We won’t be directly displaying ServerToolUsage objects, so we can handle their display in the same Usage __repr__.
usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Server tool use (web search requests): 0
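To see the dunder mechanism on its own, here’s a tiny throwaway example (not part of claudette):

class Box:
    def __init__(self, x): self.x = x
    def __repr__(self): return f'Box({self.x!r})'

Box(3)  # displays as Box(3), because Python calls __repr__ to render it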
ServerToolUsage.__add__
ServerToolUsage.__add__ (b)
Add together each of the server tool use counts
Exported source
@patch
def __add__(self:ServerToolUsage, b):
    "Add together each of the server tool use counts"
    return ServerToolUsage(web_search_requests=self.web_search_requests+b.web_search_requests)
And, patching __add__
lets +
work on a ServerToolUsage
as well as a Usage
object.
server_tool_usage(1) + server_tool_usage(2)
ServerToolUsage(web_search_requests=3)
Usage.__add__
Usage.__add__ (b)
Add together each of input_tokens
and output_tokens
Exported source
@patch
def __add__(self:Usage, b):
    "Add together each of `input_tokens` and `output_tokens`"
    return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens,
                 _dgetattr(self,'cache_creation_input_tokens',0)+_dgetattr(b,'cache_creation_input_tokens',0),
                 _dgetattr(self,'cache_read_input_tokens',0)+_dgetattr(b,'cache_read_input_tokens',0),
                 _dgetattr(self,'server_tool_use',server_tool_usage())+_dgetattr(b,'server_tool_use',server_tool_usage()))
r.usage + r.usage + usage(server_tool_use=server_tool_usage(1))
In: 20; Out: 72; Cache create: 0; Cache read: 0; Total Tokens: 92; Server tool use (web search requests): 1
Creating messages
Creating correctly formatted dicts from scratch every time isn’t very handy, so we’ll import a couple of helper functions from the msglm library.
Let’s use mk_msg
to recreate our msg {'role': 'user', 'content': "I'm Jeremy"}
from earlier.
= "I'm Jeremy"
prompt = mk_msg(prompt)
m = cli.messages.create(messages=[m], model=model, max_tokens=100)
r r
Hello, Jeremy! It’s nice to meet you. How are you doing today? Is there something I can help you with or would you like to discuss something specific?
- id:
msg_01T5fCHX6KMXPe41wHJe3RvM
- content:
[{'citations': None, 'text': "Hello, Jeremy! It's nice to meet you. How are you doing today? Is there something I can help you with or would you like to discuss something specific?", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 37, 'server_tool_use': None}
We can pass more than just text messages to Claude. As we’ll see later we can also pass images, SDK objects, etc. To handle these different data types we need to pass the type along with our content to Claude.
Here’s an example of a multimodal message containing text and images.
{
    'role': 'user',
    'content': [
        {'type':'text', 'text':'What is in the image?'},
        {
            'type':'image',
            'source': {
                'type':'base64', 'media_type':'media_type', 'data': 'data'
            }
        }
    ]
}
mk_msg
infers the type automatically and creates the appropriate data structure.
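For instance, a sketch like this should produce the structure above (assuming a hypothetical local flowers.jpg; msglm detects the raw bytes and builds the image block):

from pathlib import Path

img = Path('flowers.jpg').read_bytes()  # hypothetical image file
mk_msg(['What is in the image?', img])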
LLMs don’t actually have state; instead, dialogs are created by passing back all previous prompts and responses every time. With Claude, they always alternate user and assistant. We’ll use mk_msgs from msglm to make it easier to build up these dialog lists.
msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"])
msgs
[{'role': 'user', 'content': "I'm Jeremy"},
{'role': 'assistant',
'content': [TextBlock(citations=None, text="Hello, Jeremy! It's nice to meet you. How are you doing today? Is there something I can help you with or would you like to discuss something specific?", type='text')]},
{'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
cli.messages.create(messages=msgs, model=model, max_tokens=200)
You mentioned that your name is Jeremy. That’s how you introduced yourself at the beginning of our conversation.
- id:
msg_014Ryt7moN693dhH9WF6wU4q
- content:
[{'citations': None, 'text': "You mentioned that your name is Jeremy. That's how you introduced yourself at the beginning of our conversation.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 61, 'output_tokens': 24, 'server_tool_use': None}
Client
Client
Client (model, cli=None, log=False, cache=False)
Basic Anthropic messages client.
Exported source
class Client:
    def __init__(self, model, cli=None, log=False, cache=False):
        "Basic Anthropic messages client."
        self.model,self.use = model,usage()
        self.text_only = model in text_only_models
        self.log = [] if log else None
        self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
        self.cache = cache
We’ll create a simple Client for Anthropic which tracks usage and stores the model to use. We don’t add any methods right away – instead we’ll use patch for that, so we can add and document them incrementally.
c = Client(model)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0
Exported source
@patch
def _r(self:Client, r:Message, prefill=''):
    "Store the result of the message and accrue total usage."
    if prefill:
        blk = find_block(r)
        blk.text = prefill + (blk.text or '')
    self.result = r
    self.use += r.usage
    self.stop_reason = r.stop_reason
    self.stop_sequence = r.stop_sequence
    return r
We use a _
prefix on private methods, but we document them here in the interests of literate source code.
_r
will be used each time we get a new result, to track usage and also to keep the result available for later.
c._r(r)
c.use
In: 10; Out: 37; Cache create: 0; Cache read: 0; Total Tokens: 47; Server tool use (web search requests): 0
Whereas OpenAI’s models use a stream
parameter for streaming, Anthropic’s use a separate method. We implement Anthropic’s approach in a private method, and then use a stream
parameter in __call__
for consistency:
Exported source
@patch
def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
    self._r(final, prefill)
    if self.log is not None: self.log.append({
        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
    })
    return self.result
Exported source
@patch
def _stream(self:Client, msgs:list, prefill='', **kwargs):
    with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs, cache=self.cache, cache_last_ckpt_only=self.cache), **kwargs) as s:
        if prefill: yield(prefill)
        yield from s.text_stream
        self._log(s.get_final_message(), prefill, msgs, **kwargs)
Claude supports adding an extra assistant message at the end, which contains the prefill – i.e. the text we want Claude to assume the response starts with. However, Claude doesn’t actually repeat that in the response, so for convenience we add it.
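Here’s a sketch of what prefill looks like at the raw SDK level, which is the mechanism _r and _stream build on (the prompt here is just an example):

# Append an assistant message whose text is the desired start of the reply.
msgs_pf = [{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
           {'role': 'assistant', 'content': 'According to Douglas Adams,'}]
resp = cli.messages.create(messages=msgs_pf, model=model, max_tokens=50)
# resp continues from the prefill but doesn't repeat it, so claudette re-adds it.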
Exported source
@patch
def _precall(self:Client, msgs, prefill, stop, kwargs):
    pref = [prefill.strip()] if prefill else []
    if not isinstance(msgs,list): msgs = [msgs]
    if stop is not None:
        if not isinstance(stop, (list)): stop = [stop]
        kwargs["stop_sequences"] = stop
    msgs = mk_msgs(msgs+pref, cache=self.cache, cache_last_ckpt_only=self.cache)
    return msgs
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)
Defining __call__ lets us use an object like a function (i.e. it’s callable). We use it as a small wrapper over messages.create. However, we’re not exporting this version just yet – we have some additions we’ll make in a moment…
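If the callable-object pattern is new to you, here it is in isolation with a throwaway class (not part of claudette):

class Adder:
    def __init__(self, n): self.n = n
    def __call__(self, x): return x + self.n

add2 = Adder(2)
add2(3)  # 5 -- calling the instance invokes __call__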
c = Client(model, log=True)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0
c('Hi')
Hello! How can I assist you today? Feel free to ask any questions or let me know what you’d like to discuss.
- id:
msg_015UyBYhtQVzWatX7FXLhvDg
- content:
[{'citations': None, 'text': "Hello! How can I assist you today? Feel free to ask any questions or let me know what you'd like to discuss.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 8, 'output_tokens': 29, 'server_tool_use': None}
c.use
In: 8; Out: 29; Cache create: 0; Cache read: 0; Total Tokens: 37; Server tool use (web search requests): 0
Let’s try out prefill:
= "Concisely, what is the meaning of life?"
q = 'According to Douglas Adams,' pref
c(q, prefill=pref)
According to Douglas Adams, it’s 42. More seriously, the meaning of life is deeply personal and varies across philosophical traditions - from finding happiness, serving others, pursuing knowledge, or creating your own purpose in an inherently meaningless universe. There’s no universal answer, which is perhaps what makes the question so enduring.
- id:
msg_01V158u6gauy1e8TyDPKtpkb
- content:
[{'citations': None, 'text': "According to Douglas Adams, it's 42. More seriously, the meaning of life is deeply personal and varies across philosophical traditions - from finding happiness, serving others, pursuing knowledge, or creating your own purpose in an inherently meaningless universe. There's no universal answer, which is perhaps what makes the question so enduring.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 24, 'output_tokens': 65, 'server_tool_use': None}
We can pass stream=True
to stream the response back incrementally:
for o in c('Hi', stream=True): print(o, end='')
Hello! How can I assist you today? Feel free to ask any questions or let me know what you'd like to discuss.
c.use
In: 40; Out: 123; Cache create: 0; Cache read: 0; Total Tokens: 163; Server tool use (web search requests): 0
for o in c(q, prefill=pref, stream=True): print(o, end='')
According to Douglas Adams, it's 42. More seriously, the meaning of life is deeply personal and varies across philosophical traditions - from finding happiness, serving others, pursuing knowledge, or creating your own purpose in an inherently meaningless universe. There's no universal answer, which is perhaps what makes the question so enduring.
c.use
In: 64; Out: 188; Cache create: 0; Cache read: 0; Total Tokens: 252; Server tool use (web search requests): 0
Pass a stop sequence if you want Claude to stop generating text when it encounters it.
"Count from 1 to 10", stop="5") c(
1, 2, 3, 4,
- id:
msg_013CBJrJQgR4ardtvmAxo3Xv
- content:
[{'citations': None, 'text': '1, 2, 3, 4, ', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
stop_sequence
- stop_sequence:
5
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 15, 'output_tokens': 14, 'server_tool_use': None}
This also works with streaming, and you can pass more than one stop sequence:
for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='')
print(c.stop_reason, c.stop_sequence)
1, stop_sequence 2
You can check the logs:
c.log[-1]
{'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}],
'prefill': '',
'max_tokens': 4096,
'system': '',
'temperature': 0,
'stop_sequences': ['2', 'yellow'],
'maxtok': None,
'sp': None,
'temp': None,
'stream': None,
'stop': None,
'result': Message(id='msg_01G2v8NthxnGMcVVjtE7VcBQ', content=[TextBlock(citations=None, text='1, ', type='text')], model='claude-3-7-sonnet-20250219', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 5; Cache create: 0; Cache read: 0; Total Tokens: 20; Server tool use (web search requests): 0),
'use': In: 94; Out: 207; Cache create: 0; Cache read: 0; Total Tokens: 301; Server tool use (web search requests): 0,
'stop_reason': 'stop_sequence',
'stop_sequence': '2'}
We’ve shown the token usage, but what we really care about is pricing. Let’s extract the latest pricing from Anthropic into a pricing dict.
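The dict itself isn’t shown in this extract. As a shape sketch only – the numbers below are illustrative placeholders, not Anthropic’s actual prices; check their pricing page – each entry maps a model nickname to per-million-token costs in the order the cost code below expects:

# (input, output, cache write, cache read) in $/million tokens -- placeholder values!
pricing = {
    'opus':   (15,   75,   18.75, 1.5),
    'sonnet': (3,    15,   3.75,  0.3),
    'haiku':  (0.25, 1.25, 0.3,   0.03),
}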
get_pricing
get_pricing (m, u)
Exported source
def get_pricing(m, u):
    return pricing[m][:3] if u.prompt_token_count < 128_000 else pricing[m][3:]
Similarly, let’s get the pricing for the latest server tools:
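That dict isn’t shown in this extract either; structurally it’s just a mapping of server tool name to price per thousand requests (placeholder value below – check Anthropic’s pricing page):

# $ per 1,000 requests; divided by 1e3 in the cost calculations below -- placeholder!
server_tool_pricing = {'web_search_requests': 10}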
We’ll patch Usage
to enable it compute the cost given pricing.
Usage.cost
Usage.cost (costs:tuple)
Exported source
@patch
def cost(self:Usage, costs:tuple) -> float:
    cache_w, cache_r = _dgetattr(self, "cache_creation_input_tokens",0), _dgetattr(self, "cache_read_input_tokens",0)
    tok_cost = sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6
    server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
    server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return tok_cost + server_tool_cost
Client.cost
Client.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Client) -> float: return self.use.cost(pricing[model_types[self.model]])
get_costs
get_costs (c)
Exported source
def get_costs(c):
    costs = pricing[model_types[c.model]]

    inp_cost = c.use.input_tokens * costs[0] / 1e6
    out_cost = c.use.output_tokens * costs[1] / 1e6

    cache_w = c.use.cache_creation_input_tokens
    cache_r = c.use.cache_read_input_tokens
    cache_cost = (cache_w * costs[2] + cache_r * costs[3]) / 1e6

    server_tool_use = c.use.server_tool_use
    server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return inp_cost, out_cost, cache_cost, cache_w + cache_r, server_tool_cost
Exported source
@patch
def _repr_markdown_(self:Client):
    if not hasattr(self,'result'): return 'No results yet'
    msg = contents(self.result)
    inp_cost, out_cost, cache_cost, cached_toks, server_tool_cost = get_costs(self)
    return f"""{msg}

| Metric | Count | Cost (USD) |
|--------|------:|-----:|
| Input tokens | {self.use.input_tokens:,} | {inp_cost:.6f} |
| Output tokens | {self.use.output_tokens:,} | {out_cost:.6f} |
| Cache tokens | {cached_toks:,} | {cache_cost:.6f} |
| Server tool use | {self.use.server_tool_use.web_search_requests:,} | {server_tool_cost:.6f} |
| **Total** | **{self.use.total:,}** | **${self.cost:.6f}** |"""
c
1,
Metric | Count | Cost (USD) |
---|---|---|
Input tokens | 94 | 0.000282 |
Output tokens | 207 | 0.003105 |
Cache tokens | 0 | 0.000000 |
Server tool use | 0 | 0.000000 |
Total | 301 | $0.003387 |
Tool use
Let’s now add tool use (aka function calling).
mk_tool_choice
mk_tool_choice (choose:Union[str,bool,NoneType])
Create a tool_choice
dict that’s ‘auto’ if choose
is None
, ‘any’ if it is True, or ‘tool’ otherwise
print(mk_tool_choice('sums'))
print(mk_tool_choice(True))
print(mk_tool_choice(None))
{'type': 'tool', 'name': 'sums'}
{'type': 'any'}
{'type': 'auto'}
Claude can be forced to use a particular tool, or select from a specific list of tools, or decide for itself when to use a tool. If you want to force a tool (or force choosing from a list), include a tool_choice
param with a dict from mk_tool_choice
.
For testing, we need a function that Claude can call; we’ll write a simple function that adds numbers together, and will tell us when it’s being called:
def sums(
    a:int, # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return a + b
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
Claudette can autogenerate a schema thanks to the toolslm
library. We’ll force the use of the tool using the function we created earlier.
tools = [get_schema(sums)]
choice = mk_tool_choice('sums')
We’ll start a dialog with Claude now. We’ll store the messages of our dialog in msgs
. The first message will be our prompt pr
, and we’ll pass our tools
schema.
msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
r
ToolUseBlock(id=‘toolu_01CpF6zzMQztfisMkrQRkci1’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
- id:
msg_01C5m7hmZVisaFxCWKkgekKF
- content:
[{'id': 'toolu_01CpF6zzMQztfisMkrQRkci1', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
tool_use
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 442, 'output_tokens': 53, 'server_tool_use': None}
When Claude decides that it should use a tool, it passes back a ToolUseBlock
with the name of the tool to call, and the params to use.
We don’t want to allow it to call just any possible function (that would be a security disaster!) so we create a namespace – that is, a dictionary of allowable function names to call.
ns = mk_ns(sums)
ns
{'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
mk_funcres
mk_funcres (fc, ns)
Given tool use block fc
, get tool result, and create a tool_result response.
Exported source
def mk_funcres(fc, ns):
    "Given tool use block `fc`, get tool result, and create a tool_result response."
    res = call_func(fc.name, fc.input, ns=ns)
    return dict(type="tool_result", tool_use_id=fc.id, content=str(res))
We can now use the function requested by Claude. We look it up in ns
, and pass in the provided parameters.
fcs = [o for o in r.content if isinstance(o,ToolUseBlock)]
fcs
[ToolUseBlock(id='toolu_01CpF6zzMQztfisMkrQRkci1', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
res = [mk_funcres(fc, ns=ns) for fc in fcs]
res
Finding the sum of 604542 and 6458932
[{'type': 'tool_result',
'tool_use_id': 'toolu_01CpF6zzMQztfisMkrQRkci1',
'content': '7063474'}]
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    return str(blk)
mk_toolres
mk_toolres (r:collections.abc.Mapping, ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)
Create a tool_result
message from response r
.
| | Type | Default | Details |
|---|---|---|---|
| r | Mapping | | Tool use request response from Claude |
| ns | Optional | None | Namespace to search for tools |
| obj | Optional | None | Class to search for tools |
Exported source
def mk_toolres(
    r:abc.Mapping, # Tool use request response from Claude
    ns:Optional[abc.Mapping]=None, # Namespace to search for tools
    obj:Optional=None # Class to search for tools
    ):
    "Create a `tool_result` message from response `r`."
    cts = getattr(r, 'content', [])
    res = [mk_msg(r.model_dump(), role='assistant')]
    if ns is None: ns=globals()
    if obj is not None: ns = mk_ns(obj)
    tcs = [mk_funcres(o, ns) for o in cts if isinstance(o,ToolUseBlock)]
    if tcs: res.append(mk_msg(tcs))
    return res
In order to tell Claude the result of the tool call, we pass back the tool use assistant request and the tool_result
response.
tr = mk_toolres(r, ns=ns)
tr
Finding the sum of 604542 and 6458932
[{'role': 'assistant',
'content': [{'id': 'toolu_01CpF6zzMQztfisMkrQRkci1',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_01CpF6zzMQztfisMkrQRkci1',
'content': '7063474'}]}]
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'}]
We add this to our dialog, and now Claude has all the information it needs to answer our question.
msgs += tr
contents(c(msgs, sp=sp, tools=tools))
'The sum of 604542 and 6458932 is 7,063,474.'
contents(msgs[-1])
'7063474'
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'},
{'role': 'assistant',
'content': [{'id': 'toolu_01CpF6zzMQztfisMkrQRkci1',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_01CpF6zzMQztfisMkrQRkci1',
'content': '7063474'}]}]
This works with methods as well – in this case, use the object itself for ns
:
class Dummy:
    def sums(
        self,
        a:int, # First thing to sum
        b:int=1 # Second thing to sum
    ) -> int: # The sum of the inputs
        "Adds a + b."
        print(f"Finding the sum of {a} and {b}")
        return a + b
tools = [get_schema(Dummy.sums)]
o = Dummy()
r = c(pr, sp=sp, tools=tools, tool_choice=choice)
tr = mk_toolres(r, obj=o)
msgs += tr
contents(c(msgs, sp=sp, tools=tools))
Finding the sum of 604542 and 6458932
'The sum of 604542 and 6458932 is 7063474.'
Anthropic also has a special tool type for text editing.
tools = [text_editor_conf['sonnet']]
tools
[{'type': 'text_editor_20250124', 'name': 'str_replace_editor'}]
pr = 'Could you please explain my _quarto.yml file?'
msgs = [mk_msg(pr)]
r = c(msgs, sp=sp, tools=tools)
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_01ArY8DQsYsyNF3tRNnXWWWJ', input={'command': 'view', 'path': '_quarto.yml'}, name='str_replace_editor', type='tool_use')
We’ve gone ahead and created a reference implementation that you can use directly from our text_editor module, or use as a reference for creating your own.
ns = mk_ns(str_replace_editor)
tr = mk_toolres(r, ns=ns)
msgs += tr
print(contents(c(msgs, sp=sp, tools=tools))[:128])
## Explanation of Your _quarto.yml File
Your _quarto.yml file is a configuration file for Quarto, which is a scientific and tec
Callable Client
get_types
get_types (msgs)
get_types(msgs)
['text', 'text', 'tool_use', 'tool_result']
Client.__call__
Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, maxthinktok=0, prefill='', stream:bool=False, stop=None, tools:Optional[list]=None, tool_choice:Optional[dict]=None, metadata:MetadataParam|NotGiven=NOT_GIVEN, stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Union[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, temperature:float|NotGiven=NOT_GIVEN, thinking:ThinkingConfigParam|NotGiven=NOT_GIVEN, top_k:int|NotGiven=NOT_GIVEN, top_p:float|NotGiven=NOT_GIVEN, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Make a call to Claude.
| | Type | Default | Details |
|---|---|---|---|
| msgs | list | | List of messages in the dialog |
| sp | str | | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| maxthinktok | int | 0 | Maximum thinking tokens |
| prefill | str | | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tools | Optional | None | List of tools to make available to Claude |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN | |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN | |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN | |
| temperature | float \| NotGiven | NOT_GIVEN | |
| thinking | ThinkingConfigParam \| NotGiven | NOT_GIVEN | |
| top_k | int \| NotGiven | NOT_GIVEN | |
| top_p | float \| NotGiven | NOT_GIVEN | |
| extra_headers | Optional | None | Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs. The extra values given here take precedence over values defined on the client or passed to this method. |
| extra_query | Query \| None | None | |
| extra_body | Body \| None | None | |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN | |
Exported source
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             maxthinktok=0, # Maximum thinking tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             tools:Optional[list]=None, # List of tools to make available to Claude
             tool_choice:Optional[dict]=None, # Optionally force use of some tool
             **kwargs):
    "Make a call to Claude."
    if tools: kwargs['tools'] = [get_schema(o) if callable(o) else o for o in listify(tools)]
    if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
    if maxthinktok:
        kwargs['thinking']={'type':'enabled', 'budget_tokens':maxthinktok}
        temp=1; prefill=''
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs)
for tools in [sums, [get_schema(sums)]]:
    r = c(pr, sp=sp, tools=tools, tool_choice='sums')
    print(r)
Message(id='msg_01Cn7G1vgw8YV5eyKXSQFaLQ', content=[ToolUseBlock(id='toolu_01Jy9HjLxc9ND581HqcjzqKd', input={'a': 0}, name='sums', type='tool_use')], model='claude-3-7-sonnet-20250219', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 444; Out: 33; Cache create: 0; Cache read: 0; Total Tokens: 477; Server tool use (web search requests): 0)
Message(id='msg_01JNUGteXRfLQakhSXAV1ehu', content=[ToolUseBlock(id='toolu_01DFHbFskemZbrqew1spHFA4', input={'a': 0}, name='sums', type='tool_use')], model='claude-3-7-sonnet-20250219', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 444; Out: 33; Cache create: 0; Cache read: 0; Total Tokens: 477; Server tool use (web search requests): 0)
ns = mk_ns(sums)
tr = mk_toolres(r, ns=ns)
Finding the sum of 0 and 1
Client.structured
Client.structured (msgs:list, tools:Optional[list]=None, obj:Optional=None, ns:Optional[collections.abc.Mapping]=None, sp='', temp=0, maxtok=4096, maxthinktok=0, prefill='', stream:bool=False, stop=None, tool_choice:Optional[dict]=None, metadata:MetadataParam|NotGiven=NOT_GIVEN, stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Union[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN, temperature:float|NotGiven=NOT_GIVEN, thinking:ThinkingConfigParam|NotGiven=NOT_GIVEN, top_k:int|NotGiven=NOT_GIVEN, top_p:float|NotGiven=NOT_GIVEN, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Return the value of all tool calls (generally used for structured outputs)
| | Type | Default | Details |
|---|---|---|---|
| msgs | list | | List of messages in the dialog |
| tools | Optional | None | List of tools to make available to Claude |
| obj | Optional | None | Class to search for tools |
| ns | Optional | None | Namespace to search for tools |
| sp | str | | The system prompt |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| maxthinktok | int | 0 | Maximum thinking tokens |
| prefill | str | | Optional prefill to pass to Claude as start of its response |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tool_choice | Optional | None | Optionally force use of some tool |
| metadata | MetadataParam \| NotGiven | NOT_GIVEN | |
| stop_sequences | List[str] \| NotGiven | NOT_GIVEN | |
| system | Union[str, Iterable[TextBlockParam]] \| NotGiven | NOT_GIVEN | |
| temperature | float \| NotGiven | NOT_GIVEN | |
| thinking | ThinkingConfigParam \| NotGiven | NOT_GIVEN | |
| top_k | int \| NotGiven | NOT_GIVEN | |
| top_p | float \| NotGiven | NOT_GIVEN | |
| extra_headers | Optional | None | Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs. The extra values given here take precedence over values defined on the client or passed to this method. |
| extra_query | Query \| None | None | |
| extra_body | Body \| None | None | |
| timeout | float \| httpx.Timeout \| None \| NotGiven | NOT_GIVEN | |
Exported source
@patch
@delegates(Client.__call__)
def structured(self:Client,
               msgs:list, # List of messages in the dialog
               tools:Optional[list]=None, # List of tools to make available to Claude
               obj:Optional=None, # Class to search for tools
               ns:Optional[abc.Mapping]=None, # Namespace to search for tools
               **kwargs):
    "Return the value of all tool calls (generally used for structured outputs)"
    tools = listify(tools)
    res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
    if ns is None: ns=mk_ns(*tools)
    if obj is not None: ns = mk_ns(obj)
    cts = getattr(res, 'content', [])
    tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)]
    return tcs
Anthropic’s API does not support response formats directly, so instead we provide a structured
method to use tool calling to achieve the same result. The result of the tool is not passed back to Claude in this case, but instead is returned directly to the user.
c.structured(pr, tools=[sums])
Finding the sum of 1 and 1
[2]
c
ToolUseBlock(id=‘toolu_01FGiPmwqfSxvSFmEa8FJDAK’, input={‘a’: 1}, name=‘sums’, type=‘tool_use’)
Metric | Count | Cost (USD) |
---|---|---|
Input tokens | 5,860 | 0.017580 |
Output tokens | 1,442 | 0.021630 |
Cache tokens | 0 | 0.000000 |
Server tool use | 0 | 0.000000 |
Total | 7,302 | $0.039210 |
Custom Types with Tool Use
We need to add tool support for custom types too. Let’s test out custom types using a minimal example.
class Book(BasicRepr):
    def __init__(self, title: str, pages: int): store_attr()
    def __repr__(self):
        return f"Book Title : {self.title}\nNumber of Pages : {self.pages}"
"War and Peace", 950) Book(
Book Title : War and Peace
Number of Pages : 950
def find_page(book: Book, # The book to find the halfway point of
              percent: int, # Percent of a book to read to, e.g. halfway == 50,
             ) -> int:
    "The page number corresponding to `percent` completion of a book"
    return round(book.pages * (percent / 100.0))
get_schema(find_page)
{'name': 'find_page',
'description': 'The page number corresponding to `percent` completion of a book\n\nReturns:\n- type: integer',
'input_schema': {'type': 'object',
'properties': {'book': {'type': 'object',
'description': 'The book to find the halfway point of',
'$ref': '#/$defs/Book'},
'percent': {'type': 'integer',
'description': 'Percent of a book to read to, e.g. halfway == 50,'}},
'title': None,
'required': ['book', 'percent'],
'$defs': {'Book': {'type': 'object',
'properties': {'title': {'type': 'string', 'description': ''},
'pages': {'type': 'integer', 'description': ''}},
'title': 'Book',
'required': ['title', 'pages']}}}}
choice = mk_tool_choice('find_page')
choice
{'type': 'tool', 'name': 'find_page'}
Claudette will pack objects as dicts, so we’ll transform tool functions with user-defined types into tool functions that accept a dict in lieu of the user-defined type.
First let’s convert a single argument:
_is_builtin
decides whether to pass an argument through as-is. Let’s check the argument conversion:
(_is_builtin(int), _is_builtin(Book), _is_builtin(List))
(True, False, True)
(_convert(555, int),
 _convert({"title": "War and Peace", "pages": 923}, Book),
 _convert([1, 2, 3, 4], List))
(555,
Book Title : War and Peace
Number of Pages : 923,
[1, 2, 3, 4])
Applying tool() to a function returns a new function where the user-defined types are replaced with dictionary inputs.
tool
tool (func)
A function is transformed into a function with dict arguments substituted for user-defined types. Built-in types such as percent
here are left untouched.
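The exported source for tool isn’t shown in this extract; a minimal sketch of what such a wrapper might do (not claudette’s exact implementation) is:

import inspect

def tool_sketch(func):
    "Wrap `func` so dict args are converted to its annotated user-defined types."
    sig = inspect.signature(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, val in bound.arguments.items():
            ann = sig.parameters[name].annotation
            # Built-ins pass through; dicts get converted to the annotated type
            if not _is_builtin(ann): bound.arguments[name] = _convert(val, ann)
        return func(*bound.args, **bound.kwargs)
    return wrapper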
=Book("War and Peace", 950), percent=50) find_page(book
475
"title": "War and Peace", "pages": 950}, percent=50) tool(find_page)({
475
By passing tools wrapped by tool(), user-defined types now work in tool calls without failing.
= "How many pages do I have to read to get halfway through my 950 page copy of War and Peace"
pr = tool(find_page)
tools tools
<function __main__.find_page(book: __main__.Book, percent: int) -> int>
r = c(pr, tools=[tools])
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_012HsKTgeqwpJSBwoPxJaTiZ', input={'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50}, name='find_page', type='tool_use')
tr = mk_toolres(r, ns=[tools])
tr
[{'role': 'assistant',
'content': [{'citations': None,
'text': 'I can help you find the halfway point of your book. Let me calculate how many pages you need to read to get halfway through your 950-page copy of War and Peace.',
'type': 'text'},
{'id': 'toolu_012HsKTgeqwpJSBwoPxJaTiZ',
'input': {'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50},
'name': 'find_page',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_012HsKTgeqwpJSBwoPxJaTiZ',
'content': '475'}]}]
msgs = [pr]+tr
contents(c(msgs, sp=sp, tools=[tools]))
'You need to read 475 pages to reach the halfway point of your 950-page copy of War and Peace.'
Chat
Rather than manually adding the responses to a dialog, we’ll create a simple Chat
class to do that for us, each time we make a request. We’ll also store the system prompt and tools here, to avoid passing them every time.
Chat
Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, sp='', tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None, cache:bool=False, hist:list=None, ns:Optional[collections.abc.Mapping]=None)
Anthropic chat client.
| | Type | Default | Details |
|---|---|---|---|
model | Optional | None | Model to use (leave empty if passing cli ) |
cli | Optional | None | Client to use (leave empty if passing model ) |
sp | str | Optional system prompt | |
tools | Optional | None | List of tools to make available to Claude |
temp | int | 0 | Temperature |
cont_pr | Optional | None | User prompt to continue an assistant response |
cache | bool | False | Use Claude cache? |
hist | list | None | Initialize history |
ns | Optional | None | Namespace to search for tools |
The class stores the Client
that will provide the responses in c
, and a history of messages in h
.
= "Never mention what tools you use."
sp = Chat(model, sp=sp)
chat chat.c.use, chat.h
(In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0,
[])
chat.c.use.cost(pricing[model_types[chat.c.model]])
0.0
This is clunky. Let’s add cost
as a property for the Chat
class. It will pass in the appropriate prices for the current model to the usage cost calculator.
Chat.cost
Chat.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Chat) -> float: return self.c.cost
chat.cost
0.0
Chat.__call__
Chat.__call__ (pr=None, temp=None, maxtok=4096, maxthinktok=0, stream=False, prefill='', tool_choice:Optional[dict]=None, **kw)
Call self as a function.
| | Type | Default | Details |
|---|---|---|---|
pr | NoneType | None | Prompt / message |
temp | NoneType | None | Temperature |
maxtok | int | 4096 | Maximum tokens |
maxthinktok | int | 0 | Maximum thinking tokens |
stream | bool | False | Stream response? |
prefill | str | Optional prefill to pass to Claude as start of its response | |
tool_choice | Optional | None | Optionally force use of some tool |
kw | VAR_KEYWORD |
Exported source
@patch
def _stream(self:Chat, res):
    yield from res
    self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
Exported source
@patch
def _post_pr(self:Chat, pr, prev_role):
    if pr is None and prev_role == 'assistant':
        if self.cont_pr is None:
            raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.")
        pr = self.cont_pr # No user prompt, keep the chain
    if pr: self.h.append(mk_msg(pr, cache=self.cache))
Exported source
@patch
def _append_pr(self:Chat,
               pr=None, # Prompt / message
              ):
    prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user'
    if pr and prev_role == 'user': self() # already user request pending
    self._post_pr(pr, prev_role)
Exported source
@patch
def __call__(self:Chat,
             pr=None, # Prompt / message
             temp=None, # Temperature
             maxtok=4096, # Maximum tokens
             maxthinktok=0, # Maximum thinking tokens
             stream=False, # Stream response?
             prefill='', # Optional prefill to pass to Claude as start of its response
             tool_choice:Optional[dict]=None, # Optionally force use of some tool
             **kw):
    if temp is None: temp=self.temp
    self._append_pr(pr)
    res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok, tools=self.tools, tool_choice=tool_choice, **kw)
    if stream: return self._stream(res)
    self.h += mk_toolres(self.c.result, ns=self.ns)
    return res
The __call__
method just passes the request along to the Client
, but rather than just passing in this one prompt, it appends it to the history and passes it all along. As a result, we now have state!
chat = Chat(model, sp=sp)
"I'm Jeremy")
chat("What's my name?") chat(
Your name is Jeremy, as you mentioned in your previous message.
- id:
msg_018q1k9EPACg8QjBAeP7qpE1
- content:
[{'citations': None, 'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 59, 'output_tokens': 16, 'server_tool_use': None}
chat.use, chat.cost
(In: 76; Out: 50; Cache create: 0; Cache read: 0; Total Tokens: 126; Server tool use (web search requests): 0,
0.000978)
Let’s try out prefill too:
= "Concisely, what is the meaning of life?"
q = 'According to Douglas Adams,' pref
chat.c.result
Your name is Jeremy, as you mentioned in your previous message.
- id:
msg_018q1k9EPACg8QjBAeP7qpE1
- content:
[{'citations': None, 'text': 'Your name is Jeremy, as you mentioned in your previous message.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 59, 'output_tokens': 16, 'server_tool_use': None}
chat(q, prefill=pref)
According to Douglas Adams, 42. More seriously, the meaning of life is likely what you create through your relationships, pursuits, and values.
- id:
msg_01QmdBDZ7vf8pLcHoTCqsmQn
- content:
[{'citations': None, 'text': 'According to Douglas Adams, 42. More seriously, the meaning of life is likely what you create through your relationships, pursuits, and values.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 95, 'output_tokens': 28, 'server_tool_use': None}
By default, messages must alternate in user, assistant, user format. If this isn’t followed (i.e. calling chat() without a user message), it will error out:
try: chat()
except ValueError as e: print("Error:", e)
Error: Prompt must be given after assistant completion, or use `self.cont_pr`.
Setting cont_pr
allows a “default prompt” to be specified when a prompt isn’t specified. Usually used to prompt the model to continue.
= "keep going..."
chat.cont_pr chat()
The meaning of life varies across philosophical traditions: finding happiness, serving others, seeking knowledge, fulfilling one’s potential, or connecting with something greater than oneself. Ultimately, many find meaning in personal growth, loving relationships, contributing to society, and pursuing what brings them genuine fulfillment. Rather than a single universal answer, meaning often emerges from our individual journeys and choices.
- id:
msg_01YJDurNFrknbtQjLMUGkL6S
- content:
[{'citations': None, 'text': "The meaning of life varies across philosophical traditions: finding happiness, serving others, seeking knowledge, fulfilling one's potential, or connecting with something greater than oneself. Ultimately, many find meaning in personal growth, loving relationships, contributing to society, and pursuing what brings them genuine fulfillment. Rather than a single universal answer, meaning often emerges from our individual journeys and choices.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 129, 'output_tokens': 82, 'server_tool_use': None}
We can also use streaming:
chat = Chat(model, sp=sp)
for o in chat("I'm Jeremy", stream=True): print(o, end='')
Hello Jeremy! It's nice to meet you. How are you doing today? Is there something I can help you with or would you like to chat?
for o in chat(q, prefill=pref, stream=True): print(o, end='')
According to Douglas Adams, it's 42. More seriously, the meaning of life is deeply personal - many find it in relationships, creating positive impact, pursuing passions, or spiritual fulfillment. There's no universal answer; meaning is what you choose to create.
You can provide a history of messages to initialise Chat
with:
chat = Chat(model, sp=sp, hist=["Can you guess my name?", "Hmmm I really don't know. Is it 'Merlin G. Penfolds'?"])
chat('Wow how did you know?')
I didn’t actually know your name! I was just making a random guess for fun. It’s quite surprising that I happened to guess correctly. What are the chances of that?
If you’d like, you can share how you’d prefer me to address you in our conversation.
- id:
msg_012dvDUX8udLpFYPNfUa2sv5
- content:
[{'citations': None, 'text': "I didn't actually know your name! I was just making a random guess for fun. It's quite surprising that I happened to guess correctly. What are the chances of that?\n\nIf you'd like, you can share how you'd prefer me to address you in our conversation.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 58, 'output_tokens': 60, 'server_tool_use': None}
Chat tool use
We automagically get streamlined tool use as well:
= f"What is {a}+{b}?"
pr pr
'What is 604542+6458932?'
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr)
r
Finding the sum of 604542 and 6458932
I’ll calculate the sum of those two numbers for you.
- id:
msg_01QFwCbk595VHhnKqqZc2EMi
- content:
[{'citations': None, 'text': "I'll calculate the sum of those two numbers for you.", 'type': 'text'}, {'id': 'toolu_016ViHPsnWnzVq65MwRyr8gW', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
tool_use
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 437, 'output_tokens': 85, 'server_tool_use': None}
Now we need to send this result to Claude—calling the object with no parameters tells it to return the tool result to Claude:
chat()
The sum of 604542 and 6458932 is 7,063,474.
- id:
msg_01PaLtHRATzhFrQFV3eg82pG
- content:
[{'citations': None, 'text': 'The sum of 604542 and 6458932 is 7,063,474.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 536, 'output_tokens': 25, 'server_tool_use': None}
It should be correct, because it actually used our Python function to do the addition. Let’s check:
a+b
7063474
Let’s test a function with user defined types.
chat = Chat(model, sp=sp, tools=[find_page])
r = chat("How many pages is three quarters of the way through my 80 page edition of Tao Te Ching?")
r
To find out how many pages is three quarters of the way through your 80-page edition of Tao Te Ching, I’ll calculate that for you.
- id:
msg_01FPy3rjcqggtxdRxgHG37PP
- content:
[{'citations': None, 'text': "To find out how many pages is three quarters of the way through your 80-page edition of Tao Te Ching, I'll calculate that for you.", 'type': 'text'}, {'id': 'toolu_013YVRza4qFjXh7Qv9aryPp4', 'input': {'book': {'title': 'Tao Te Ching', 'pages': 80}, 'percent': 75}, 'name': 'find_page', 'type': 'tool_use'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
tool_use
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 547, 'output_tokens': 122, 'server_tool_use': None}
Now we need to send this result to Claude—calling the object with no parameters tells it to return the tool result to Claude:
chat()
Three quarters (75%) of the way through your 80-page edition of Tao Te Ching would be page 60.
- id:
msg_01JUU6YMCqFBW1shmzM7Fj8w
- content:
[{'citations': None, 'text': 'Three quarters (75%) of the way through your 80-page edition of Tao Te Ching would be page 60.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 681, 'output_tokens': 33, 'server_tool_use': None}
It should be correct, because it actually used our Python function to do the calculation. Let’s check:
80 * .75
60.0
chat = Chat(model, tools=[text_editor_conf['sonnet']], ns=mk_ns(str_replace_editor))
Note that mk_ns(str_replace_editor) is used here. When not providing tools directly as Python functions (like sums), you must create and pass a namespace dictionary (mapping the tool name string to the function object) using the ns parameter to methods like mk_toolres or toolloop. toolslm cannot automatically generate the namespace in this case. For schema-based tools (i.e., Python functions), claudette handles namespace creation automatically.
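Concretely, the namespace is just a mapping from tool-name strings to callables, so the two lines below should be equivalent (a sketch based on the description above):

ns = mk_ns(str_replace_editor)
# roughly: {'str_replace_editor': <function str_replace_editor>}
# i.e. the same mapping built by hand:
ns = {'str_replace_editor': str_replace_editor}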
r = chat('Please explain what my _quarto.yml does. Use your tools')
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_01E6LUbTPMsTGZMbiSMbQ3vm', input={'command': 'view', 'path': '_quarto.yml'}, name='str_replace_editor', type='tool_use')
chat()
Explanation of Your _quarto.yml File
Your _quarto.yml is a configuration file for Quarto, which is a scientific and technical publishing system. Here’s a breakdown of what it does:
Project Configuration
- project.type: website: Defines this as a website project (as opposed to a book or other format)
- project.resources: ["*.txt"]: Includes all .txt files as resources in the build
- project.preview.port: 3000: Sets the preview server to run on port 3000
- project.preview.browser: false: Prevents automatically opening a browser when previewing
Format Settings
- format.html: Configures HTML output with several settings:
  - theme: cosmo: Uses the “cosmo” theme for styling
  - css: styles.css: Applies additional custom styles from styles.css
  - toc: true: Enables table of contents
  - code-tools: true: Enables code tools (like copy button)
  - code-block-bg: true: Adds background to code blocks
  - code-block-border-left: "#31BAE9": Sets a blue left border for code blocks
  - highlight-style: arrow: Uses the “arrow” syntax highlighting style
  - grid: Configures the page layout with specific widths for sidebar (180px), body (1800px), margins (150px), and gutters (1.0rem)
  - keep-md: true: Preserves Markdown files after rendering
- format.commonmark: default: Also enables CommonMark format with default settings
Website Configuration
- website.twitter-card: true: Enables Twitter card metadata
- website.open-graph: true: Enables Open Graph metadata for social media sharing
- website.repo-actions: [issue]: Adds an “issue” button for repository actions
- website.navbar.background: primary: Sets the navbar background to the primary theme color
- website.navbar.search: true: Enables search functionality in the navbar
- website.sidebar.style: floating: Uses a floating style for the sidebar
Metadata Files
- Includes two external metadata files:
  - nbdev.yml: Likely contains nbdev-specific configurations (nbdev is a library for developing Python packages)
  - sidebar.yml: Likely contains sidebar navigation structure
This configuration sets up a website with good code display features, responsive layout, and integration with development tools. It appears to be designed for technical documentation, possibly for a Python package using nbdev.
- id:
msg_016Hxyn9LbZ5EuhYVXYSUoPv
- content:
[{'citations': None, 'text': '# Explanation of Your _quarto.yml File\n\nYour _quarto.yml is a configuration file for Quarto, which is a scientific and technical publishing system. Here\'s a breakdown of what it does:\n\n## Project Configuration\n-
project.type: website: Defines this as a website project (as opposed to a book or other format)\n-
project.resources: [“*.txt”]: Includes all .txt files as resources in the build\n-
project.preview.port: 3000: Sets the preview server to run on port 3000\n-
project.preview.browser: false: Prevents automatically opening a browser when previewing\n\n## Format Settings\n-
format.html: Configures HTML output with several settings:\n -
theme: cosmo: Uses the "cosmo" theme for styling\n -
css: styles.css: Applies additional custom styles from styles.css\n -
toc: true: Enables table of contents\n -
code-tools: true: Enables code tools (like copy button)\n -
code-block-bg: true: Adds background to code blocks\n -
code-block-border-left: “#31BAE9”: Sets a blue left border for code blocks\n -
highlight-style: arrow: Uses the "arrow" syntax highlighting style\n -
grid: Configures the page layout with specific widths for sidebar (180px), body (1800px), margins (150px), and gutters (1.0rem)\n -
keep-md: true: Preserves Markdown files after rendering\n\n-
format.commonmark: default: Also enables CommonMark format with default settings\n\n## Website Configuration\n-
website.twitter-card: true: Enables Twitter card metadata\n-
website.open-graph: true: Enables Open Graph metadata for social media sharing\n-
website.repo-actions: [issue]: Adds an "issue" button for repository actions\n-
website.navbar.background: primary: Sets the navbar background to the primary theme color\n-
website.navbar.search: true: Enables search functionality in the navbar\n-
website.sidebar.style: floating: Uses a floating style for the sidebar\n\n## Metadata Files\n- Includes two external metadata files:\n -
nbdev.yml: Likely contains nbdev-specific configurations (nbdev is a library for developing Python packages)\n -
sidebar.yml: Likely contains sidebar navigation structure\n\nThis configuration sets up a website with good code display features, responsive layout, and integration with development tools. It appears to be designed for technical documentation, possibly for a Python package using nbdev.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1372, 'output_tokens': 593, 'server_tool_use': None}
Exported source
@patch
def _repr_markdown_(self:Chat):
    if not hasattr(self.c, 'result'): return 'No results yet'
    last_msg = contents(self.c.result)
    def fmt_msg(m):
        t = contents(m)
        if isinstance(t, dict): return t['content']
        return t
    history = '\n\n'.join(f"**{m['role']}**: {fmt_msg(m)}"
                          for m in self.h)
    det = self.c._repr_markdown_().split('\n\n')[-1]
    return f"""{last_msg}
<details>
<summary>History</summary>
{history}
</details>
{det}"""
chat
Explanation of Your _quarto.yml File
Your _quarto.yml is a configuration file for Quarto, which is a scientific and technical publishing system. Here’s a breakdown of what it does:
Project Configuration
- project.type: website: Defines this as a website project (as opposed to a book or other format)
- project.resources: ["*.txt"]: Includes all .txt files as resources in the build
- project.preview.port: 3000: Sets the preview server to run on port 3000
- project.preview.browser: false: Prevents automatically opening a browser when previewing
Format Settings
- format.html: Configures HTML output with several settings:
  - theme: cosmo: Uses the “cosmo” theme for styling
  - css: styles.css: Applies additional custom styles from styles.css
  - toc: true: Enables table of contents
  - code-tools: true: Enables code tools (like copy button)
  - code-block-bg: true: Adds background to code blocks
  - code-block-border-left: "#31BAE9": Sets a blue left border for code blocks
  - highlight-style: arrow: Uses the “arrow” syntax highlighting style
  - grid: Configures the page layout with specific widths for sidebar (180px), body (1800px), margins (150px), and gutters (1.0rem)
  - keep-md: true: Preserves Markdown files after rendering
- format.commonmark: default: Also enables CommonMark format with default settings
Website Configuration
- website.twitter-card: true: Enables Twitter card metadata
- website.open-graph: true: Enables Open Graph metadata for social media sharing
- website.repo-actions: [issue]: Adds an “issue” button for repository actions
- website.navbar.background: primary: Sets the navbar background to the primary theme color
- website.navbar.search: true: Enables search functionality in the navbar
- website.sidebar.style: floating: Uses a floating style for the sidebar
Metadata Files
- Includes two external metadata files:
  - nbdev.yml: Likely contains nbdev-specific configurations (nbdev is a library for developing Python packages)
  - sidebar.yml: Likely contains sidebar navigation structure
This configuration sets up a website with good code display features, responsive layout, and integration with development tools. It appears to be designed for technical documentation, possibly for a Python package using nbdev.
History
user: P
assistant: I’ll examine your _quarto.yml file to explain what it does. Let me first view the file.
user:
project:
  type: website
  resources:
    - "*.txt"
  preview:
    port: 3000
    browser: false
format:
  html:
    theme: cosmo
    css: styles.css
    toc: true
    code-tools: true
    code-block-bg: true
    code-block-border-left: "#31BAE9"
    highlight-style: arrow
    grid:
      sidebar-width: 180px
      body-width: 1800px
      margin-width: 150px
      gutter-width: 1.0rem
    keep-md: true
  commonmark: default
website:
  twitter-card: true
  open-graph: true
  repo-actions: [issue]
  navbar:
    background: primary
    search: true
  sidebar:
    style: floating
metadata-files:
  - nbdev.yml
  - sidebar.yml
assistant: # Explanation of Your _quarto.yml File
Your _quarto.yml is a configuration file for Quarto, which is a scientific and technical publishing system. Here’s a breakdown of what it does:
Project Configuration
- project.type: website: Defines this as a website project (as opposed to a book or other format)
- project.resources: ["*.txt"]: Includes all .txt files as resources in the build
- project.preview.port: 3000: Sets the preview server to run on port 3000
- project.preview.browser: false: Prevents automatically opening a browser when previewing
Format Settings
- format.html: Configures HTML output with several settings:
  - theme: cosmo: Uses the “cosmo” theme for styling
  - css: styles.css: Applies additional custom styles from styles.css
  - toc: true: Enables table of contents
  - code-tools: true: Enables code tools (like copy button)
  - code-block-bg: true: Adds background to code blocks
  - code-block-border-left: "#31BAE9": Sets a blue left border for code blocks
  - highlight-style: arrow: Uses the “arrow” syntax highlighting style
  - grid: Configures the page layout with specific widths for sidebar (180px), body (1800px), margins (150px), and gutters (1.0rem)
  - keep-md: true: Preserves Markdown files after rendering
- format.commonmark: default: Also enables CommonMark format with default settings
Website Configuration
- website.twitter-card: true: Enables Twitter card metadata
- website.open-graph: true: Enables Open Graph metadata for social media sharing
- website.repo-actions: [issue]: Adds an “issue” button for repository actions
- website.navbar.background: primary: Sets the navbar background to the primary theme color
- website.navbar.search: true: Enables search functionality in the navbar
- website.sidebar.style: floating: Uses a floating style for the sidebar
Metadata Files
- Includes two external metadata files:
  - nbdev.yml: Likely contains nbdev-specific configurations (nbdev is a library for developing Python packages)
  - sidebar.yml: Likely contains sidebar navigation structure
This configuration sets up a website with good code display features, responsive layout, and integration with development tools. It appears to be designed for technical documentation, possibly for a Python package using nbdev.
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 2,408 | 0.007224 |
| Output tokens | 693 | 0.010395 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 3,101 | $0.017619 |
Images
Claude can handle image data as well. As everyone knows, when testing image APIs you have to use a cute puppy.
# Image is Cute_dog.jpg from Wikimedia
fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)
img = fn.read_bytes()
Claude expects an image message to have the following structure:
{'role': 'user',
 'content': [
     {'type': 'text', 'text': 'What is in the image?'},
     {'type': 'image',
      'source': {
          'type': 'base64', 'media_type': 'media_type', 'data': 'data'
      }}
 ]}
msglm automatically detects if a message is an image, encodes it, and generates the data structure above. All we need to do is create a list containing our image and a query, and then pass it to mk_msg.
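For comparison, here is a minimal sketch of building that structure by hand, without msglm (assuming img holds the raw JPEG bytes read above):

import base64

msg = {'role': 'user',
       'content': [
           {'type': 'text', 'text': 'What is in the image?'},
           {'type': 'image',
            'source': {'type': 'base64',
                       'media_type': 'image/jpeg',
                       'data': base64.b64encode(img).decode()}}]}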
Let’s try it out…
= "In brief, what color flowers are in this image?"
q = mk_msg([img, q]) msg
c([msg])
The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers or asters blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy’s white and reddish-brown fur.
- id:
msg_01SD3FDQbuJ4x98uNv62K2Xp
- content:
[{'citations': None, 'text': "The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers or asters blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy's white and reddish-brown fur.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 75, 'server_tool_use': None}
You don’t need to call mk_msg on each individual message before passing them to the Chat class. Instead you can pass your messages in a list and the Chat class will automatically call mk_msgs in the background.
"How are you?", r]) c([
For messages that contain multiple content types (like an image with a question), you’ll need to enclose the message contents in a list as shown below:
"How are you?", r, [img, q]]) c([
c = Chat(model)
c([img, q])
The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy’s white and reddish-brown fur.
- id:
msg_01VFVzc4JCVU1ZYxCiY5PBqb
- content:
[{'citations': None, 'text': "The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy's white and reddish-brown fur.", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 72, 'server_tool_use': None}
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
    return str(blk)
contents(c.h[0])
'*Media Type - image*'
c
The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy’s white and reddish-brown fur.
History
user: Media Type - image
assistant: The flowers in the image are purple/lavender in color. They appear to be small daisy-like flowers blooming next to where the adorable Cavalier King Charles Spaniel puppy is resting on the grass. The purple flowers create a nice contrast with the puppy’s white and reddish-brown fur.

| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 110 | 0.000330 |
| Output tokens | 72 | 0.001080 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 182 | $0.001410 |
Unfortunately, not all Claude models support images 😞. This table summarizes the capabilities of each Claude model and the different modalities they support.
Caching
Claude supports context caching by adding a cache_control field to the message content.
{"role": "user",
"content": [
{"type": "text",
"text": "Please cache my message",
"cache_control": {"type": "ephemeral"}
}
] }
To cache a message, we simply set cache=True when calling mk_msg.
mk_msg(['hi', 'there'], cache=True)
{ 'content': [ {'text': 'hi', 'type': 'text'},
{ 'cache_control': {'type': 'ephemeral'},
'text': 'there',
'type': 'text'}],
'role': 'user'}
Claude also now supports smart cache look-ups, so it’s very simple to keep an entire conversation in cache by constantly telling it to update the cache with the latest message. To do this, we just need to set cache=True when creating a Chat.
chat = Chat(model, sp=sp, cache=True)
Caching has a minimum token limit of 1024 tokens for Sonnet and Opus, and 2048 for Haiku. If your conversation is below this limit, it will not be cached.
"Hi, I'm Jeremy.") chat(
Hello Jeremy! It’s nice to meet you. How are you doing today? Is there something I can help you with or would you like to chat?
- id:
msg_01NRp9ys88wxbuTysaZnausb
- content:
[{'citations': None, 'text': "Hello Jeremy! It's nice to meet you. How are you doing today? Is there something I can help you with or would you like to chat?", 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 20, 'output_tokens': 34, 'server_tool_use': None}
Note the usage: no cache was created or used. Now, let’s send a message long enough to trigger caching.
"""Lorem ipsum dolor sit amet""" * 150) chat(
I notice you’ve sent a large amount of “Lorem ipsum” text, which is commonly used as placeholder or filler text in design and publishing.
Is there something specific you’d like to discuss or a question you have? I’m here to help with meaningful conversation or information if you need it. If you’re testing something or just curious about how I respond, feel free to let me know what you’re looking for.
- id:
msg_01H6ovSz8T1V2rj4iynbeKvH
- content:
[{'citations': None, 'text': 'I notice you\'ve sent a large amount of "Lorem ipsum" text, which is commonly used as placeholder or filler text in design and publishing. \n\nIs there something specific you\'d like to discuss or a question you have? I\'m here to help with meaningful conversation or information if you need it. If you\'re testing something or just curious about how I respond, feel free to let me know what you\'re looking for.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 1101, 'cache_read_input_tokens': 0, 'input_tokens': 4, 'output_tokens': 90, 'server_tool_use': None}
The context is now long enough for the cache to be used (note the 1,101 cache_creation_input_tokens above). All the conversation history has now been written to the temporary cache. Any subsequent message will read from it rather than re-processing the entire conversation history.
"Oh thank you! Sorry, my lorem ipsum generator got out of control!") chat(
No problem at all! Those lorem ipsum generators can certainly get enthusiastic sometimes. It happens to the best of us! Is there something I can actually help you with today?
- id:
msg_01KsvSSaJjpuRst2BejF2LbT
- content:
[{'citations': None, 'text': 'No problem at all! Those lorem ipsum generators can certainly get enthusiastic sometimes. It happens to the best of us! Is there something I can actually help you with today?', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 108, 'cache_read_input_tokens': 1101, 'input_tokens': 4, 'output_tokens': 38, 'server_tool_use': None}
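If you want to check programmatically whether a turn hit the cache, the usage fields shown above are available on the response object. A sketch (assuming r is bound to a response from the cached chat above; the calls above weren’t assigned to a variable):

u = r.usage
print(u.cache_creation_input_tokens,  # tokens written to the cache this turn
      u.cache_read_input_tokens,      # tokens read back from the cache
      u.input_tokens)                 # uncached input tokens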
Extended Thinking
Claude 3.7 Sonnet has enhanced reasoning capabilities for complex tasks. See docs for more info.
We can enable extended thinking by passing a thinking param with the following structure.
thinking={
    "type": "enabled",
    "budget_tokens": 16000
}
When extended thinking is enabled a thinking block is included in the response as shown below.
{"content": [
{"type": "thinking",
"thinking": "To approach this, let's think about...",
"signature": "Imtakcjsu38219c0.eyJoYXNoIjoiYWJjM0NTY3fQ...."
,
}
{"type": "text",
"text": "Yes, there are infinitely many prime numbers such that..."
}
] }
Let’s add a maxthinktok param to the Client and Chat call methods. When this value is not 0, we’ll pass a thinking param to Claude: {"type":"enabled", "budget_tokens":maxthinktok}.
Note: when thinking is enabled, prefill must be empty and temp must be 1.
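Under the hood, this amounts to a raw SDK call along these lines. This is a sketch, assuming cli is a plain anthropic.Anthropic() client; note that max_tokens must exceed the thinking budget, and the budget itself must be at least 1024:

from anthropic import Anthropic
cli = Anthropic()
r = cli.messages.create(
    model=tk_model, max_tokens=2048,
    temperature=1,  # thinking requires temperature 1
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Write a sentence about Python!"}])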
think_md
think_md (txt, thk)
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    tk_blk = find_block(r, blk_type=ThinkingBlock)
    if tk_blk: return think_md(blk.text.strip(), tk_blk.thinking.strip())
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
    return str(blk)
Let’s call the model without extended thinking enabled.
tk_model = first(has_extended_thinking_models)
chat = Chat(tk_model)
"Write a sentence about Python!") chat(
Python is a versatile programming language known for its readable syntax and wide application in fields ranging from web development to data science and artificial intelligence.
- id:
msg_01S9DaYTNoo31yNh6fHZcQZc
- content:
[{'citations': None, 'text': 'Python is a versatile programming language known for its readable syntax and wide application in fields ranging from web development to data science and artificial intelligence.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 13, 'output_tokens': 31, 'server_tool_use': None}
Now, let’s call the model with extended thinking enabled.
"Write a sentence about Python!", maxthinktok=1024) chat(
Python is a beginner-friendly, high-level programming language with an extensive ecosystem of libraries that has become one of the most popular tools for both quick scripts and enterprise-level applications.
Thinking
The person is asking me to write a sentence about Python again. I should provide a different sentence than before to offer variety. Let me think of another aspect of Python to highlight, such as its community, ease of learning, libraries, or another key feature that makes Python popular.
- id:
msg_01JiyHw3dYuqgbdHqi3jNvTd
- content:
[{'signature': 'ErUBCkYIAxgCIkDGyn1SbTWjPjZr8XN8EIk5Y7YWiDvWZT2A4r+vxn8pzP6hNNkf2MmWpOqe5a42Ap8tlgvNDLg/98+wbkAbdLVWEgx/k1Ann3OFL7gBruwaDEyTKpSxI5/ZwVXm2CIwtZtisIvmybu3gAfvnhosRn3aChBZjl+RIBCAEs+i5BKE142qhTJM2d0XQk+JnBhWKh0/GloGOLUPWVsV/HpAFB61np8MTxhGhCCf04mhDRgC', 'thinking': 'The person is asking me to write a sentence about Python again. I should provide a different sentence than before to offer variety. Let me think of another aspect of Python to highlight, such as its community, ease of learning, libraries, or another key feature that makes Python popular.', 'type': 'thinking'}, {'citations': None, 'text': 'Python is a beginner-friendly, high-level programming language with an extensive ecosystem of libraries that has become one of the most popular tools for both quick scripts and enterprise-level applications.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 81, 'output_tokens': 106, 'server_tool_use': None}
Server Tools and Web Search
The str_replace special tool type is a client-side tool, i.e., one where we provide the implementation. However, Anthropic also supports server-side tools. The one currently available is their search tool, which you can find the documentation for here. When provided as a tool, Claude can decide to search the web in order to answer or solve the task at hand.
search_conf
search_conf (max_uses:int=None, allowed_domains:list=None, blocked_domains:list=None, user_location:dict=None)
Little helper to create a search tool config
Similar to client-side tools, you pass a non-schema dictionary to the tools argument of the Anthropic API, containing the tool’s name, type, and any additional metadata specific to that tool. Here’s a function to make that process easier for the web search tool.
search_conf()
{'type': 'web_search_20250305', 'name': 'web_search'}
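Passing any of the optional arguments adds the corresponding keys to this config dict; for example (the exact key names are assumptions matching the parameter names above):

search_conf(max_uses=3, allowed_domains=['wikipedia.org'])
# {'type': 'web_search_20250305', 'name': 'web_search',
#  'max_uses': 3, 'allowed_domains': ['wikipedia.org']}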
The web search tool returns a list of blocks: TextBlocks containing response text from the model, a ServerToolUseBlock, and server tool result blocks such as WebSearchToolResultBlock. Some of these TextBlocks will contain citations with references to the results of the web search tool. Here is what all this looks like:
{"content": [
{"type": "text",
"text": "I'll check the current weather in...",
,
}
{"type": "server_tool_use",
"name": "web_search",
"input": {"query": "San Diego weather forecast today May 12 2025"},
"id":"srvtoolu_014t7fS449voTHRCVzi5jQGC"
,
}
{"type": "web_search_tool_result",
"tool_use_id": "srvtoolu_014t7fS449voTHRCVzi5jQGC",
"content": [
"type": "web_search_result",
"title": "Heat Advisory issued May 9...",
"url": "https://kesq.com/weather/...",
"page_age": "3 days ago",
"encrypted_content": "ErgECioIAxgCIiQ4ODk4YTFkY...",
...
]
}
{"type": "text",
"citations": [
{"cited_text": 'The average temperature during this month...',
"title": "Weather San Diego in May 2025:...",
"url": "https://en.climate-data.org/...",
"encrypted_index": "EpMBCioIAxgCIiQ4ODk4YTF..."
},
]"text": "The average temperature in San Diego during May is..."
,
}...
] }
Let’s update our contents function to handle these cases. For handling citations, we will use the excellent reference syntax in markdown to make clickable citation links.
find_blocks
find_blocks (r, blk_type=<class 'anthropic.types.text_block.TextBlock'>, type='text')
Helper to find all blocks of type blk_type in response r.
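The exported implementation is collapsed here, but given the signature above, a plausible sketch is a simple filter over r.content (how the type argument is used is an assumption):

def find_blocks(r, blk_type=TextBlock, type='text'):
    "Helper to find all blocks of type `blk_type` in response `r`"
    # Sketch: keep every content block that is an instance of `blk_type`
    return [b for b in r.content if isinstance(b, blk_type)]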
fmt_txt
fmt_txt (txt_blks)
Helper to get the contents from a list of TextBlocks, with citations.
contents
contents (r)
Helper to get the contents from Claude response r.
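To give a feel for the reference-link approach, here is a hypothetical, simplified version of the citation formatting (fmt_txt_sketch is not the exported fmt_txt; it just illustrates the idea):

def fmt_txt_sketch(txt_blks):
    "Render text blocks as markdown, numbering citations as reference links."
    out, refs = [], []
    for b in txt_blks:
        t = b.text
        for c in (getattr(b, 'citations', None) or []):
            refs.append(c.url)      # remember the cited URL
            t += f' [{len(refs)}]'  # numbered marker in the running text
        out.append(t)
    links = '\n'.join(f'[{i+1}]: {u}' for i, u in enumerate(refs))
    return ''.join(out) + '\n\n' + links  # reference definitions at the end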
chat = Chat(model, sp='Be concise in your responses.', tools=[search_conf()], cache=True)
pr = 'What is the weather in San Diego?'
r = chat(pr)
r
I’ll check the current weather in San Diego for you.
Based on the search results, here’s the current weather in San Diego:
May Gray has returned to San Diego after a record hot weekend, with temperatures dropping dramatically. 1
Today’s (May 13, 2025) expected high temperatures in San Diego are: - Coast: 60-66°F - Inland: 59-66°F - Mountains: 48-60°F - Deserts: 77-80°F 2
The day is expected to have “May Gray” conditions with limited clearing into the afternoon. There might be heavy mist, drizzle, or a few light showers, with the best chance of light rain having been overnight into this morning. 3
The mountains and desert areas are experiencing gusty winds with a Wind Advisory in effect until 5am Wednesday. West and southwesterly winds of 20 to 35mph with gusts of 40 to 65mph are expected, which may cause blowing dust and dirt leading to poor air quality at times. 4
The air quality is generally acceptable for most individuals, though sensitive groups may experience minor to moderate symptoms from long-term exposure. 5
Looking ahead, sunny skies and warming temperatures are expected early next week. 6
- id:
msg_01AqayAjaFZrCf6jEbpBqrW3
- content:
[{'citations': None, 'text': "I'll check the current weather in San Diego for you.", 'type': 'text'}, {'id': 'srvtoolu_01CvUbkC6o551QbYJAGPGrTk', 'input': {'query': 'San Diego weather today May 13 2025'}, 'name': 'web_search', 'type': 'server_tool_use'}, {'content': [{'encrypted_content': 'EoUQCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDHnJEQlu+/96g32zJxoMgmHzIGTNp92C9/usIjDPVyEAOIChOhGK01/QyHl64C6j0KIvnm4TlDzw0OCfaXgnvL9uSAA42TjSvcK9wk0qiA9UuGYCNgMbVhko9DcnbCCAHREeyazuXiJVEtFZ9Tjh6hEdChQ66FctKUo3FTr9UPFrcvB3VcQdJoRXH70GFf1tnhinNLhzsb7N7GGOeJ39nB3IW2SB/1/xdc8zRGA0vxe/P4OjXodQ7A/XFbzxMiDCs0M4pXMhpasidJ6UaMmZB1AcLjDsX/9lLDr2ZdRX1z80AW/8yfm2E9ijJaISGyi8S0mA78V9NBiCN305aLGPi8B4WIzRghobL22VXywb5OccsR41pBVAuZUI3fXDhXEW9dVWW/MmyPO2W32pYcO5li32tQ4fiyMN2NXQgPXVarf0vJQlCNGhsrBMjAwG8pk0u04ja/03NjiKDGTgTPwo/Gwlf20qB4S67mgm4hKg7F3IbcwlzsYZGqhCvxHGtLajHyT0g88aXxbE4DTMPpthl1kiOSiAHadbatDT1Sv1e3P7WCY6drjih4UIJIIT1SK7flqlxuEmz/w7Y3jgQAit7XVXgmyiMRhk3PARFDOL1yFG/lvqYQf6cX9c120KfsrpX0Oi/2EkbIzhH/KKOIDGDDtKK9rfrXGgk/3d1GeTkLA/4j0BeyDENiaSBB6B7hvtI820MlXCbHxWUFpiNgvvt8va/auD8txuNCfWCkTQtuwQCKpZNGC0FTqKK67vvV8O+Ixort7sptXl7G3JBhJr3LugDtIBy4Jbldf8ciQvj9dm8XVxVm4le8GfoaKaycwXus0O8Jq9M/hU+2ppKkIKLKKjvU2pDbpDdHqKzS7c7DJmWQLH/XmMO1JAJPvMlnqWQJN4GGtCk+WJi1jqNRmRUy6ernrmd6WtBqx2/dJ86uJ+/2Nzj+vSncY4xxKx+vldxQPlWHG8Hey22oUcjgCv/XHjVV67csDXtrvS9Vng1qIL52re71bU1/Tc90iOrwWYZ56qnOEGIYu5RDniuZJpBdA0HUXO4ulwjl9exqH2cVgzZnRL6YpWe/otVu2iNO8qtYJRgqD5+dkv4rY6DQE1ErBsDd0sMw6FxGr+wAQBQN8258Xv7GcI+FDVfEb9njldXR0QF7Mq9vwXZL2WhAvLg1iDPTpvTj24iHyn6XBr9zCKi6RVAunoOw8HIbqoBkecLbJf6s+nxC/ywIV6HWlH2rx4ShKiH3G/jo+4pU64/8FTva4jST3OSriI2EBGhsLwS7mJWMANCOuMJ3k4emHqHx8EAGP84Y216bUiEg0ZMGeGHPFpF0LuJwPoioDj5YUPLZxUoDO0/q8IZ6TQdm77lrVVNtRYpamu5jy4JiW3Jccxz2TZHZwAC38iFeNh2eMqlvzjT+jMSzbko99q7EoqYulw4oaG5cHovnpaQcdJyoaLxV93Mjsj1ZAWuAZimKODYtAFjqRMv1I+EZ5kLbuVspDOFlOh8fCjO+FBJnqVe+o5Dp29a+FOcJz9/IiUqCVz4ln8bpyhQF3rOfEmSA9WnwxxJuojAkGzF6wSzRpYz5m0CEM4ixLaqJVU50uYAUufeweI8Uwq91STo0+JD+eS+6iya6m1gLhAO61nJc+T+kODEFUGrpAvn2RNRfjnYvRwaGFxzgyV/S4Zt5D6nVgePS4KPzZy4D6nI2r/wFYu9iqO92exus/KIg2BozhCP3UItvoUSDNvITyjKNc8C26km1pXRpaeKS1YabbrXIzFfFCr6uykUyJFxjgGljcVVENKSuZRIVgHGIx7q0T5vVW1yZYpCKsafK4Ga4O1nXD5IWCqdEaQRLMwEua/4qSWTRFGc9hJsnnF5WXmptExeBqUqBmfbmbOju6Swch8iLXoWkqbZlspva+yWpkcEcILa9+0cu/DvMEOp1gnyXPpzOwxI/89fkveN2xsGlCMTbRiQGtnJIQEF9MNPdpgkcVwp2EjH5c3CYvWxnyZzHy6FOCyanZKjOGjigrVHk95xgpycZOVwccsvyrb49/lDZZjmUF/UObszhor/GTSwIt0Ongk/aEXvday3IvlZsS+bDEvMveLzEIaPM6mN0h9TatYbcJAp9FHKVoYbsRhx/axXA3sMYa2GohsTM+byISbG4FWAcxfaWk5Gyjhymelxhf4u4/e6PatZN3F1PD59tp8sJn+t5wzRZE7W8sXtwUdBqcg1sHjyG2Kv0/+3Bcp2CM11fExqUPYFUurHQ+D64UGjDctl9D7nlo74oNKnue9beD0yVDIIiSjfbitDZLz4K9S1CrvKHaVfKilLsMoHoc0PQodMARHEE86Q4lw/FQZ9BtsMhofF9vn6nCUXjgVa5rh6+Bg8Ps9j1RGu8ug5UXf0h7CAPyd3AiQrLSVQCIUY0zwo5ZY58X8JWlhcG/RHdmsKjiVUiztNR4eblje/Ch16VCCouNoAL/sxwvehvwGcjJQ+izRLlFyR1ZdkYQTlcN6PKaxjN2L0Gs+W/zIbFKlcZO6QE/CX1zP/uEYbA0ad927S6Y7ZnBQvqqX1C3F+aGMAkoh4iKuZlZX3IHXBYp8j8mr9HNXnnA6lH/ossrK50TlYSHvCFaclf4laqGYZkfv9/elTtqGpRtB+QqS3ljMmfPUJ1KQQ1udUR7fgaDynoCUuLvQ6auEK/q4SRgD', 'page_age': None, 'title': 'San Diego, CA Monthly Weather | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/may-weather/347628'}, {'encrypted_content': 
'EqADCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDOPvdMpZdJsl3UF/ohoMqe4r+1GTUdGwcV7TIjAhZD8lV5VwtUadQWx1+P+MsANC4Nfucs8Dw9nz3PP5t6aNXOhoQg+Y27vn7/P6cegqowIxwOlXB+d2+wREz0LngHCo3rF9j8WYPo/R8joWu4lXMDX9M/h6xtKakNrYH1CsIfr1D1PEgAj4mNYnBl0CdKzkAMURVo1GxwSWQ6ZLQki6K0HS/0awNubv/wgvY4AeKBBUAz9rPQb3/CvxVyy4m5gH3fMfL3ii5Qm1B/NYd6mE5k1usD+bcHvDlqcdWIzbKbTknq0cHdsyWcCBD24i3aGVIrdn4ZS8GtGpcRzHivxmgSdJYjmWS8J8CEuyJ2UInx1tS2ce6j2yDDLD1rjReg/Z+WlcB29jsshT3PuoYQUU8xeMOYAiWCAqj1SUhlBkkRGAnvQKeAaTBev31WoL58KYYUIXXcZ8Xxh7LKY/07l3SK26NP7z8D9mR84zBwP5q+Qd668YAw==', 'page_age': None, 'title': 'San Diego weather in May 2025 | San Diego 14 day weather', 'type': 'web_search_result', 'url': 'https://www.weather25.com/north-america/usa/california/san-diego?page=month&month=May'}, {'encrypted_content': 'EvADCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDBMMvcRxZqFGdu3kdRoMdIHwa0YRIT4QEMrwIjDMJD1oSGnIjElWf+T032ZuDvcdJ7OeR3NShVaDvnPgBm5/vLbsGOzfBsE8s9WzQuwq8wJ4R8NViMl3RWoFr9fphxG85fPK0rV7z0cl8TpAs0g9X9nh9pHW/IanUZg9FVFJk/eEj0QbFrjm4wL4rHvRIsiTtRuuBfAf3WIVbzUBJ0SZaGG7fNT8fupb5aGFjcTkzc/NJGIGGr3y2c5X6NUg4uuhAT2bvIpueZ16N/b/7tRWnC9RObDHV4cIgST8zM7OFVzZVbWDLk3Z/lVN3pjpnm5kWtAN/fcsNBdPErwqYSsZKZ2IytUIqDzF0LJXUguLGnv89VO/bx9COUJlhPFuZDfwgEq+0ikLdebVnIEPU6TrEXwqhPbPaR39bWQR7MNc8lBieSh0V0l0WttRL4oQtS4580dT8YcKI4h8xtC3FiAgpKz2sv0pRLSPLDWcir+UgGHbQpG8FxxhNB9iY9smhgQS02duQxvZTBHm5dzJiN8gXVrxIbwLFLDXjqnMOh0Nla2U9F0pclcFFYaDtaqxhyPoRwwQi4LJ/E7m0hhv87tHK3Z3sxgD', 'page_age': None, 'title': 'Weather in San Diego in May 2025 (California) - detailed Weather Forecast for a month', 'type': 'web_search_result', 'url': 'https://world-weather.info/forecast/usa/san_diego/may-2025/'}, {'encrypted_content': 'EsQYCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDOCUUnWDmQihIcuoYRoMO93H+M36uFbMSli9IjCn3HCBevJSKDbaAN4okGRDsHJqhy7oQzwOIjgA7zZkxMTJl+nTI/iUiqudgP/FE3Iqxxfl8JbIYI7K5ZdJ6f0Kw8f2F8QtnecP9uXTX2JGpWTeURzd2pxRgT/c2MLKbuWFZ5uv3C3J86wCJcq4poX9WSHboKxf+5cN+8FpzCewX0b4Slqg9zF1reSyWNz7O5C2RhGelKScAt5e3W2iJoBtWpzElmzef3jaoCAsvjkaQ7gg1NBXq2b0Ge0AIaGE/2M0WLs7zBv5D9I9r56V2BBhnA3XWdLu9z6+PlErh3/8QXvngu1irikObgnYKDH/luX5Fs50rZovqN/zVP5m++IkqbzHrsAnZY50I7g+mT5ExeBuq037jA+kzwsVaaIjRDEozNtGQi1rss66KxNzy9swAmc4K0WkUXHwUxsRZSeRTQyk0iGSiIjSSjgwm/xo2X4sl1GB3iDlam6p1eSqJ9/aBixUVD/9wjYkxb1tn5GNYFZES/x9TJCS4VpUYEivS0KchWLy+fGCozgHCyXQYMH4fWA7x6fvraMIUrs+eVOzRjYEAfxgUX/7BfKRyysIeknNjFKE3Nab7HwSWQNmfMvgLV0VWtJsdhMwxATmlB9p5i4Tq/b7gGuj8sd5/ygtBApdNwoGivs3Xan1Nl1mX4wQ5jLmdOJpPftiQA+gj1GOGZMeEiunnxP3Rhb71samqm+UplIpltVeGFPux3cv1FUWqHRHyniSBH2Esze6r1x64fF7qStgrNAZHGSVtt9dW7vEMWPXzgagNvjm6kX2fLp38pxuldmdWOWDS8qozjCRsxCLtuRZ4/YLgdZ5J3SBHssUggIG1L2JmsIvEb5bTRQy+r35S0u34o+AX1WUY72sWCWI2klKzolBRpD7BaRG3xGBI2CVPgFHKcF6onCC0W850hNC8UpOgwbGMKZEGVR9rt0iErTD9gE00olptDxOx4Ol472fc6IPENSS/IvcwApWLVz2GSkIvFHds/s3M5OtszQuPsIpcs0cHN/cJCap14GeE7TzCErQs1tPsitFvZJJDav9RPH6lZtDspU6WNNQSJo2K6j8VNhQGDEodJzf8VCX62MSKLn4lnsaC3m0075/b9pepz2AqLNf0OiZZ1lVjrW95lH0CTlKAeMGdA5Ch0k2gbaPcK+yYNRHoRop2d0MyCqGcRGDakeKcNisZy6/NiuCc307yN9appEumoFIHipR7OTwJGejBOc2lDO4EpBM0VtlKrO/rzBoh/eVZrjEWVlFTVC6M5bWlKiWFxh/k62d9qzdnlRhNAOg7esYSTPmky5kDZUJWkdwG5wNSEJQaCeeT4l6+v2O6XE4cwdMkKnQ4nTn9iNJTfflxNAYmB1BO4bxTR9nnviiIZ2NYHmhVlnfzR5zaeIUgF1QG5T5gBG9/n16gNOIuUedDdsPO+oC4wKZdOcRfu5wdleClRwPxF8rbQtApmc+vUyjLFgqir16bikY6B9nHjtnv2iDxifW+1BazUPK0N/4QJRgYVnuuuhNM4ozacH1Uf8eUWq4Oem2r0fnoVws2KH2ycjefN03utlRwufRw8/wFrW8yT2RtgeAzDXnmvMMJVS9LSKLlYI9/r6Ee3WvuDgDUnfj2y3O/feREhXCHN4MQYiEHmpL/DlaYZO9pUJQ1hxOdAyfrQlw5uv+bZMbu4gjj4XYfVfx/bZRDd3EnHzPttRF+p2/Sw8iiDfsbPch/viw6NF5SZY/1F3r86ueQ
FG3kBdG8+R30FSPi+H9EpdnQGF+BWnatlvrTFx1Yw3v6KSYyW5L9rk6aV6Fy60fhXN4ZS+xm9D4fEisVxACiBm+WMfOUB3d5kI5uoKY5Qg4Hdpv0Vtm3xJMk2MbgNyCOQTTFGMM8LODMolvlBIQ7rj8lf3GGH3CDSbajGsRVQFsM5kMWRgXhrTQeK0Xy0B0L1forrTolfExgNaP2NdI4qPOPBTAU1B1BuYHmoOvUBFJ5byqeB4x6gtcEyo6R50wMjoARgpr1sEzCZDgAHl3myjuQ+QNnSnn9ludNwVcStj/qFYEDQLzL1y8fK49EsXdIQzDFklF6koE61607rdku5q0JkMNXFFMQ9Eu86oWjyK3LlfV/LBPvyrMF9cSg/Ne1hiSH5vOCxJT0/OXhNjMvc6I/K9mg2I77dzKjIMLty3y/V5h9lSfa8l0cvDsxBxuH8PbpUdtEaP7PiLj2Q5up0w2QxIBeuinUbkHHubEB8CWdDfGgoGptaj9wVOS+8WblJrw/0u6IhSgrClGq4dJB+SSA51lrG1zPOssp7VPg5A0UoYhgWF/TaCoe328PFEeHA2e7SKBAxIRf065aybWwOJqEDE+n1IwbsWHkeb07s02NMd3W5kCuuKm8yk1bQBjWNLa6d5I010iyoyF5MGR90xqmcHT1CnE2Z7o0MXOihdjbkwTKL8Bybved193WsNNCl9GH5t76pJbAMpdp3RwuVt20SZsxtFeoFXHxTJ6CpfvGwj17XwfNb0LimO3S/VB6hcySsZfTaF/a+V4qMKnzijaQCmivrIp1nc8iCjYNq5zZxMPZNx4Ydvu7x0Zc/CePkVM71bVBsbkNoN2iEercb23SQLtykAquSksiZoRVgqzFrCBOmWgME6qN9deVe3Z16xyu3xF44j6RRMub5yi0sd9AUEBbE9l+eoyYMg4WRFMCieQvl14RsT2Am9QHVD2wm1n1TsrFgYXgZbeLeOx+sO5PWX/yHzKNWQw7bTlMnkqcq181Ryii8g/oFyhtzX+xt08X/vByL0s0umb4qS/KaM/BOsUYy3P0vG8DRkDo2awTo0CIX2sDW/QwacsvLmhWO/AXKd/JSPkr2MUtpK/KDgeV524AKis4Fe4qgGdLqpH26c7Jyb5GaAD6ljTan1SHzUAvb8CLISTcYfBaGfEyOL0BjT2m5vEcBgfmfysAcPC4koOiGx/goWiAnEXtrkq8lvGFHYlgx1DFm+2X+0Pn/4Yi7IARzylP8NWgY7T1soIRqyynv48UJ/4LPUXWJ9IkZLVxKhsZn9elg3CM9OLeatrQNJAlyhjRgQdiWou4SkilEw1Gi8D7wd0oTCOL2M5/pgm86qyNXLkTJ/mki1bsbVCLSwe0iGfkaGroDPhMGxjMm9HAd4zQDb8xny5MhKwe7Be6vXqTIhjMsr7Wh/kVuWZS2qolOeuQYuyfkjEpWcHGxdrxlsqY+MxRC3YaqMleaUnvGawMmgMOKrl+qXnGMWRJvBZDe73qYXa9SNBBsWDk4PBy8lX0rSzl361W7Lgi1+wPdteRb9noKheRjaF1tvvHgPZGg4UAxzkJmY/01G92e4x54c3fnioUIl/wt1869NeH1tdKohVglr61Yj1RuAlXoGXoVVhuehUSHLtQIhjucA3miluKJLF8atKzNiPce8ebjJsvYCDFXSm02u/U+RFnid40fcBZEZa3k6f+I832H84OosZMa2sPwEOXBo21S9hhyCZ39qqH7k7PtdwTfFGwflvLKkZQ6rQU4sM2lS4zw5yT1uPpkR2SZ6jhkN4C4lfefsYWEic17UUsPf8tD9M6DYBGp/4x/9JfB0dwtMKUBBBlVLiYhi8QTIvEHx6pz5u6lcaaNNjSZTCGjhZdK8hmmeFsBiwsNvJW3Ng4UhK3FV3eT573YIbCsz4dMkVS2AOeij/coTlrIjlzj7BGd5AG4cZ2cYIqJEob+d3NcCI6RDKZx81EG0g6iHJVk5UJUF3nh8F/XiWsEhyTz5RAap6ztVpzEmhB/jE7ER30T+oz/GLPTtiChR+jKTouXYee6xOkVlpR+i+81XQap8q+1d4mMk0tqPghCUz8PNiFA1R3BbIHQzFnGJ9goj6Cvc1WWIPlN6zfI0DKWRwUqkKurW7N5UsW3m2VCQNSdHEPMdM8OE7kMF6BkjnX0BZr8ZjtZIQnPLnnjhj3nc3DDEptKANVw8rg1Y13SqhbRI4HmGX/DAN/CSUsdIrWPC33UQT8ALWmhMukqupft53v/kQ1Pt1jY3dPPsCE8wV1D5aSeNIhGwBygf0iVQqrKrPdagggn5af75MITYGJhcc+iCCOwUyLtfdQq0h2w48F1+hWqwkKDEYAw==', 'page_age': None, 'title': 'Weather San Diego in May 2025: Temperature & Climate', 'type': 'web_search_result', 'url': 'https://en.climate-data.org/north-america/united-states-of-america/california/san-diego-714969/t/may-5/'}, {'encrypted_content': 'EqsDCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDKswBpI5PBCHhagrMxoM5HARJFIsfVGdS7qFIjCseNNQvjP//+eNrAuyvZpQ94uiK+YcbGxrTdcbxbgHSYi2zIujO/f6ffvzvwpMTS8qrgKVV74jYVKqW2Y1H6YB7EH4mGoJsiZJEIvZ4vX+iH8EcYUtwizGhLM0lDYVFrdu7rZJV0J82mzw2UrPpj5Jd+jFLi2i9pIzM6ANECwZCe8qRLrYFju3SvSWElOIiyQ+xj4kftxqkpOQhw39e9Ykew5AMiikZo0u09Tg/RH3cvxFBOKbFErd1wptoSqjD5KNwQfWyCY2p+as1fr6dgEjo1AXEwCsznXiHnCtgVd1FABZx0+7+oAgBchx48ztWQoRm+hinXvHl4IXG/xvDhKxiy7lkbRk9TUkqYEDR+zg3RYH9ij9kfMH0oJkttziTdj2lMahO0XL0Gbl/C5B1/jLmEz3AtthEdVnKB5FI0FuXb5vaecJMVvHWcqPOQ/YHPTGrY4Fj+9ATFkapaJo592UnxgD', 'page_age': None, 'title': '60-Day Extended Weather Forecast for San Diego, CA | Almanac.com', 'type': 'web_search_result', 'url': 'https://www.almanac.com/weather/longrange/CA/San%20Diego'}, {'encrypted_content': 
'EvoPCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDC53gNiaaUYeNhFXIRoMCR+5AH5Wk6j7TnwKIjBSH84wZXRHOTprG6RQFU+lmgwjGf/UUpIWorbNTg0dP2iQhPnMopI+DXwz9iSk2Lgq/Q7njRCyB0nEs/ysYOqSuxJz3yIduoXwLBHN01w7SBh+sUd7dFbLAfWNmdWCLc8p5PbEfEY0XaIW5WFUHztzbl2xSWbYmRzODBFVtwX7LofGuXT3t9oesBNMBv+hQU289z2xBYnR7KZ2bblUrGhxDm5ggS0nxNNl2iF/t7Mn9sU6e4WtAuB27+R/DIPmvfamQaJ7baK0qdVjIAsFrnLElxLXNg6uqtiEPFCWXevkaP+KktUE6ekxqFKHEBdRWIXuOXqLWMsq0NyPccFhv84RU1TK/Dl/gmvacl1td4UOkBzqiB5XqcZ/YQqEOqFksYbjgO7iTEo+27kP6hwpFSurqcQp7QU5fBPyxwHmWI0JMTQBsrgVRnv5lVsqPZlOHRt7LqgcB/VkR028kLE7eCL+MhOE1s5uTdA34i5ajhCn1OragomvmrYfeGcxKRP5/0wE2uKo697Um0m5xLUnVPnQzoO1n3X0FTsP80J3lROsjzes1CtSNdKH94vct27ySoxXVT3agNRzQBfqxgy/EVn8GavpiD94vrJ7KZdQNFtTmZ9M9VVYkECrtfBniYxgreXIDyvSAYMQbDG3tqfzF6HMd3laCHfaaMMUFy1F3QAm/ERoz5/dfVr3BvInnYr3cmE7Xs5TkElMWXcwl9zua3xlHeOlgTsQXnyiljmVuOquO0helKusZxiTPImYrOcK6pi0co8xD8nIoEjP5Bw1Nz7LR2L/DbFv0Masa/XiN6QNsTVmUYLcAbqdmDfoCCROYE09c8XIaHWtbZ6MbXgKuU2PVFjM32nYulH0m8eKuxQDMIDogM0DDYzn4ERUGK29Frk1YvMOcGcHRNfE2H664ZcbM7ZoMYVE/FARQvZ8mA8N4bPTJUBguFcz7yupvcIJfBMZgq8HZ0d38jTqXov3WMOuE6Te6Ms7i4iMGliEzp7NdT1psoXfQo1onuwWJBcr/xpoWRZEF4rArFbq/fji/hBp3keplZw2rWrK2e+Au0ZlIhfq2tcI7htEenDYD16x6oi+cO+D3i7QLIt5pz2TF7hin1/xFhkhiAxvg4qLd0cWKrJIqtykc1tUDI2q9zEZtw+2p5FmW74wGBFolN80JBpXKUiN7iaRmUNIcX5Y03CYST1C4h1YbLVYQILR5TCvy8TzrGPJ8i6eGRn7pyvPSi4YIsFxZbOKCDpqsF6iPBif2vGk2+V96DvIaqwi6IvnscFuxx22VZqzYLbWQ/s03kXipHRpvae+PIxerO+6NXBt3B9EeYMWxDju3l8C4oQY9P0XD/4pVNzWw+LfAYI83K3mS+uChcs25bmndOCIOYJCuF4j3SNTVNV0C6ibJpsG9gYqVDs2VApvdsjsQ5ubOETfj0BZ2EATlWzfKoTleJC+DoxKhFJ37F8v4Z4XGnRy071SCD4hOPZEmJF8GY4LEBKYFmLubEARr6PjI3OtCrLgjt0UFJpNOtiRhYb3FrjK87Lg7Vmkm6FbYeGQrYhWNdQM+BtSo6B1iSEjOiQWFIZi8Gx7M+ncTe+vNk6ir/rRx2uoTD1ax46ZM1GLp9nl9BjFZJqmUrMCkRXxF+DOQjRqcBZ6VfOJqkvZRnowPIOGbHi8RDgwjRHC99Wf5ZBzLYCdOyFKGPzqExPtFcN4eCAVi1o8vzIcjsiUVRibWDnswAAhlLKB6c+mNNCH6rTDNbH92GFZH+qG+606YYM0wyU00H9zyoKNMXqINqYrjDYF/gclEdzOe8/hjSWSbnUk4RtRqmO0+N0tAbum6dXJ6x6+f+K77d6aCLJUekwCaZ0FA2bYQdq4udPOLXt4fEGaFKdiCjYgBPsVVOP0VIy+/wQQBvnyEAY0q2hpZ1IKxYQRjmgPJYjuFUufC856GV1uGUjgz7nug+FvPdn1jGAEohU/VXk8Lt/CidULbOuyjSUi3fQogc7ugDH9CsOcZ29pweFlIsrny4C4W/SEBBqCSwbpgQbQT/3OLBxLxCFwo9WGhUZWpTTlnCsgQm7RCPD7xTBbJ39FQs0c3xLrnmlVchfaKUKmW57y5HRkZTdiwaXdEXiGkldF4m0FTwBt6a1AgveMGTusG08w3SWn/z3fo/GRiIJDBu59O2UBBKOe8VhVNUmJDgW87Eb12jgYAaKeDt8my1kBVCgBAmpHggfxucPG8okGRcB8vGnbHTT4xrAp5vFXEsQZM7jtjzAn/fnIUcEtHF/Pmd18b0AgfhbuYncek8RYby684ul3h4/IjDcZ9aR7P811H7E7FMMgjQzMnm/aWeI8n4jCLfzBnnOFHyN3lu+s49QmxOtekv9BZUqfZuxUxR/sSrLw4XqGuaYtie0+iCm72fP8vzLkO9aJWtwRZRHgGa3w0vaWn1PGZsc0+JbJvF3VRXwwP+lQQx3FKkwsCEtkiq9kRjZHo+F7mCPyH/OVd/fKSRuPucxg3iB4aHxjz94MYl/cjaW7bc8nUKLN+k5bx95us+jbGtkpuo0N4QaJ4f5iG93Ja8+yYtqaWXJcgEvw6jJ/uqR6ecdv5x05C6GE4J9rwujP2+Af8JKIDOGu3Wb2k7LtRNNn6vbNKb8YAw==', 'page_age': None, 'title': 'South San Diego, CA Monthly Weather | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/south-san-diego/92154/may-weather/2178586'}, {'encrypted_content': 
'ErQLCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDPpn3zFUZF16tX212BoMm+p2maaFta2+nnP8IjDRlD5Oi3URvUdhv+DD2G1YJR+IlJ/KQcBCW1qKp5t11gAyWUUCIJ1i5fwbV2/YN2kqtwrrxiPIA+VdUuBy0HaDwCUQuG4kzkreellA+zPzQxLBtHsmVmI763g4alslprh7hZ9yUsK4zce9zJzRysPg8RpB6LKmyfX4v4xaedegIfkvhB0VKwNiSbPX9dXTof7WbyymcrgBU7wOGVyIuWXV57sl/eNs7GT/LUBh0xkKQqSyoUiPk8fC50iT7xOuWt+Bbgm+PlAmxAinpz4Owiq6mD9lMusN62q5Sf/cmJFhmRYxe55umY2Jlc2Jeu0WEGnaD7r3B4Q3vEyXWAF+hqB7MbQlS2DNs0Fi1K1kNWZvHKfBcYqvT1Ecekugb/ASwG297OZ+lr2dTByf19i9D3n8JuHNDtX8w/B/36SnM5RsJW33fJibhHqdng7BxnuoUiGdfIfPvtP98brBMTg+LRp3MocNubZkUU2JUoKeD3XH8ehczJW5TNJVcIzmp8BIbSlcOS4J0ytAiXowDJdvJDoRBIO8h6JSIPzBoZVRpNIKShywOM+WXn9Mz0RPEXsuyoQ0RiyS0z1mtsK9NndmDa9ocTceDSgw1C6zCWTFb3ErbQ178land1ff8I27VZC1A6UcMr0Juqv5RlIjTfMyHgPGyI2E4r2E/DYxIvq4AE33WVFJoW7XHA5QuH3aKJ10xIX8ffX1orrPmmP81F8xVfwroaMAxczHBX8BFMFPombRIWdF1u5ebIDnd3/B4okdCvEF7zSiTnni3I0ZU2F0QHvxOMzYoysGZc00y2ZqRxhETcLk4u5wlnQEEXF5hstw+XazVaK54Rhg+ffnH0BuoK3+C0uyLxkKlvCmfbcX7agRYKxV53f9QJefUzYJQOERUrSs2exFnYrt+V/6f9I779e8G05y/1dx9Aj+5oquDuApdzLf205y5eWafvNte9j94gPMPmoWgEzopuMNiKUUvDH+4HgG04ICsmkBPVKqbfjbq/HSgjx2dcWWLUeZobPHh4TxzY543DydCLIlkrehRxxw3RQfeMM6HMPySs4XlLYGkdMBB3IObJQjfxOhTlIKZ+myUxWaaEW6DsWuYZpfdqeOBZZeriMSTfH10FIlSpB/i7yAZM4bo3xLIehKxb97gG1FNCWihyQLR3bDnfgF6D9gOCl0znPNWsPuVkbuOTPibA2NW4kuefnEZeWiqe9xBFup4EWP2TlTyZIGAM22Mt+kxiKqXjhWHRvEQ6850ofKZeVr2Ia7g+atlpJHxv3EhNAXwTGjJKLRjj17nAiWA9Xpubf5tnFCSZaC8DK6GXagbPVhzOiwOJ8EvD0ZnQN3h68EncKeWVdXYE3CIKcPA8O+jE1n/1cZ9HU8fbxJL0+oMB6Sl/2lAS6NwAmrxLp6TeVHsiBx8CLrqN5Ch+GNPV/JdIlK4gGEhsfIXQine7wXZowVCf43phs+DHL6VyIQ4DnKd7/BUZNayMef9qZydxGtCPJ6cCMkFQnCeQQZibNM0nKqpVTM42JAPVPwkfByK24rHqZjOxmGCTO6TYksdhbG2BLYBkDwxRLOg1uzngWMmfABpl8c06XTb0lC8JygY/ItGFY2RP1b81lVHpgEV+Sr60babyn4bzjaSHBcSroEuje6VAEaBSO9UaHlRNlUtsKDXJgpotod14//fTghm5CBpWhS2q0/k5G2/3ARZ6SdV03U1kX6sGa7oT9qL9wDuAwfe3vcQ4Af8D9Pu/jiNxagGEzOk8W43hzj1HNuUYrKwklDo+YKp7t8h6l0dKSeQ/4urFbgp3v7Yc1J0RFj9frOykYHgcb+EN75MCPn68fUQKu0zzr3+aGtVwQYAw==', 'page_age': None, 'title': 'San Diego, CA Weather Forecast | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628'}, {'encrypted_content': 
'EuoTCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDLw5LkY0v5DrDsNLChoMCmQEYNKWykQdkaCjIjBo7Jigs1lnme/Jmr1zMXo7CKWWPhrY4/r/uFsCvpzA4GJSQPFymQAbu1sqK8KVt+sq7RJ7+vlyP8dHt5XDznTd4bygu2MrhxvesSKE74EX7tJvGUBN9ifJxTLkD5dFnwbUduUp3CYmskQ80MIPFz3mlOVK+wU/fq8smVXPr9aAbNF0lKA4JO+vZHB5nDFrlUIiBnqWlrnAzmry8iqgQ/T1I9bggOK3AXkHGDTMYTgqAJcFnD2XQJiQwaQ8EQ9+BDb6H61V9G3VWtOdgC2kkf9cmZMPG4eBBL+azuhxLVECZwd1wLhmWq6dBuoniTsNqc1VeIGJnkCGeLqAbTPJOOekkYvIe9QTdSCFyKKTWDVmDSaD8NJ0m8ExwC2mwd8V7Ay5HIE9e512MxzIuxDhVKFQBRP2Fzh+ZYe8RjRF8yzhtfA3zdYC1CwWdnkURFGZiiguvbVrmml5bDG8JPz/Ac1EElkwV5Y94tGSWr2H1BYvTW8/J7zz5agsjGsM4cgruGoOQ2vdKFfUh6Y/kTUddExm52GKxZWRxqV+TKdG8k0yrjknYgZH+ON+Quw3vNSob4Vnqcz2Kb2f7/cvBTvqIBrGGlBL6wY2GWopbzAX7ZPi+RIja7amIqaMcNA6dR1YpPlko6F6C7Qb0FMmzzewFG+83MMguaM9WZgPj2UR/JlGy/6I2sOgmuEJKXN61i4dK6QAqiOg4p0wcihdj6/P+PygCUeqIMJJA41wW2wfmdDVME8p8U8+unGfyhAH43J6mwMFM0zC3D3djBhzs35w1YSlvzp+Y+2j/xpI+Xs6o8HvGA09eGqtFxdrBI94HPozJDZhbUl6di4VoVmzrlD7Rkl6dOGBBQtVtxiWEp2wvOehZkDzOOvuF5eqhSdSuNpVUVTiF/aCrGX1ckgypaFYSvbx4TaeIGVgeBNrxpOEMoi96Bi8wHcROGZqpjYdOOLjpZzsIjojjiDHIE0EwNtsVS0uLYPUKMb3mzjzdjDgeEaQCSerWIs+1hSjHSCVOT9lnqfdmoQad8D352VxmuCWKflVYfQk7icCJ3KZnWVIBYCBUkRdb7YXAeBhtt6Z8BNKSPV7/GTnUGAW5qlrgoBix7stUd+LtrFaUa0Af23za22d3j+Z7BFo9+V6tuSJ53pS24NBLidODsj2BADMm1wTbC08K8r1/N1zqjcC8q4TrnPgRW1i91fHV1W3Dh4xIaWMRN13HH2gMaf+/JMjSkuI/YLGJDTyOcKfRJiKte3TiJa1qvR1eZMW+pPcRYW1w4reAML6wMpts6lonng/JOT89buvztAggjPHoK2CwyUSzhBWvRQ2fHFq9mC7ZHdX1GDZ9/YtihMen5/7CuktQIxmLdVmc0CiU8w+Oq6tDB3kHbXcVMmjzmxk7Lw1rZYuOWJwS8CMQnurS0NGUB2xpnK3gkJ29caOMgieLNfkRwgOJY1f0MdsCFtIPRrEt1d1tPqq3UpeCc6JqwYv5mu3LBeQ0n1TjCH+10OuC6jjpQdXOT7yxq9Fuzji02RzcUhhqV3bhvlc0mj+EuxnnJyL9+IsND3yliV2bVScLgMVAOmnL3T8Oe46+DTybQDipSF2cpIfdinTiSh8b/E+gA2rlqe0ZuYZwKvNtCdNVVuH1EZnmbL1xh4+ATuHxvhRBlxymJLiTSndIyejS7xfykGar5lZc/ubYTw4Mt8Rqo8Nz5ZObswzQjYsIYEWdpFkJq5r/edpQW2Xir5toyutIIkfFQiM0xR34jTrkzOth3NJ0aTUSPH8BaBhqnEamzk/reCNas6LWMykQ2UpxfmrqK4RrMYiujAqSCwYhKXirAg7xiwrgG6+zGcteFCyqSLtHpyrRyTOhlbtJJHxnkDmeoM25w+G4LAsrOLsTD3/yjkrAeLbR6s49yNE5uxxnp+rpEZa0yP6q6uh3SitwWW24ykPAr8++ldno2cQ7HMMxuhATK0MoPpyI11qgK3PM47KDOvDoU2CLtWU/lYhM4Zs2UiYry5x7LxbLNr8y5rL23dBMjup8viWCpBFG5OvvcepZFzlx68sRSZhk/BKe1MI4FcnMna/ZAZdnsjYaP4KuQO8V5NtVvl1G9fw74S1Ci6Y/LYgCJW1n19S0YnTmkAKdx4GbzGu0Su5j6YzQbiIACfy6dnx10Eg4QSlbcEQMB7su+ibdjAP6tjgXhJus039BJIInWWC80nDGvewaPdSOEuuObnCg5Ti4Ct4fLikpCBfohzJfQlM14GudRiLMvyLduiZI0yWX9+Vfdtgcl2w8DdSFyXOaEppaK3I+aWgTASTutCIMKvyllFBj2anU34bafyQ0IsJjrB+IuxLNC1Ha1HPqo8YCAs/+NXb5813nSVN6ZIw2NZoZ4DGtjKcMxdqLz46NYff//KjDMEp/ah7PbxhRhQWSujrgjInihIRUsdcJSJyb5+9EWByZpfDZ4NK4qrf0tZrJbs9FwCC0jOVWxaEutI7MhKlF4EY3tIU4Dm+X84cJ0rY4xQAqFnGGad+jYZ/AijQCt9hPwrxfRRolC3FU5jUroi4bKuuGMv2TrBZdVNhuatjXbqr0YQ+ua29BEi+kGlmlXQdvUd3cL9l5n/JNPf8YQ4JMsf7uTqXFXUlo1artFEv1qLdTkZl8rQ5zUBcvW9mWZqtBZ6cnqOGK5XFoDtmUJ6SCv9mmVp6OcAw7OGiBYJD0jvYe9iDh/YpvHhzQUwyuoqgaZR5z8skhIeOqdmfnXsKA9EmsBWpy+02ZbX+swveP3vISMcrPM5M4KCfHcUDoVBS2qG8eC6YWUWXCGlYyw/7OVMnfmgouplXV04oxlntQ6vYAClGpCu4oZ/qJmMLOqNh/oT3dP+jXzLHTi/JD8NqNeMwxsyAWueXwzcRzGDzNFUE0BnSaE91xHLXABY9YGBTkQHFsoCMWrlUwsXcN5K+mF9dErSv6tkA4pBAK1eeyqMmy89emur6CKG3xDXhiBR9TLIciO5z7bp7X1P3N0OQXFyvHbtE+XIxWwzDRvft8AdwGeSiVAvFx48j5ERkn/JDaEieQiZJbqBJU75aGtWot9TFiFrJT/LCRX3saRUYOLzi5sXPOokH+9RSN1YdJpoM5MMDgLP9/safc1dHbsXNABJvebLS6w1sBmGosAE7KckDMljcZGb3kG47vCsCtjL3Tj+F5fuzD3giMUm8AVECfBmZgFHSq9+EAEZSIFYdOnVn0IyQBdr4mEV0zaRkqX8bT1ZQgqKJeyoGUjZIdf3Gj6Qclvp0yKC1k47dFnu5juYYcsOQCpS1YBY8sBtz7s4KGAM=', 'page_age': '2 weeks ago', 'title': 'Weather in San Diego County in May 2025 - Detailed Forecast', 'type': 'web_search_result', 'url': 
'https://www.easeweather.com/north-america/united-states/california/san-diego-county/may'}, {'encrypted_content': 'Es0LCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDClQFVKj60V8LyvR3xoMXl7+M3KEUBCSjLr0IjB5fRtXTtb3pziTsXq9zblo8FswZij23kawOvzNx/Ga2c3fiGMrCiBhJKlSHquWzRYq0ArdVIlSTk+zKNPKvb3gFtTQCmSc4ff3wR8PxNIcOAIY5tM3Wh9xSQJjSRk1ZGet2ujaNr5pZ8gcJ5QYnwhTHWuvdFOmOi4QRxW+4CVYkeKgYkeQUrwIVbnKwZn39aNdByLesdfwAnk6M2W0Cxx7vfcL3v8eN7GOtqFfVL7NzDIjI/BKsygbwUJ5MgoHxSENf3d6QBgL1HcSaTMTucZkGCJpWR2afV+1OUPGrG4NYt+I7abPjKO0dnWNMk5rFNtMWsrJompK9vm+dXZ239l2GDREejaLhUfVprWDMzsmE9mswnh8bmqQ3Ay7Z4N26PgUbKXtVWH/kS7t7nVoy4eSkaX4hbp/ReU8UsE9rCSYhBozAQcXl+Cd3Rnpm/5cr+tnIjOKFiooFY9rDAJf3oNLIPIQuWIGEt7vkg1wbB7NjzZ61Mg2PXNwNWp491ih4azLVskBU4mM3TZfMkTBZov9Jke5FB/AwCSw6zDWgqzFEg8UJwL6YeyPsqaHXXWLMdO6GjV67c+ce5S2G7iX59xeCrS3bUmNEtmIAlBxhI3naD5Fjk/rP3xTB8CiYvjuCMui8X9tQysNgfAJRTglPB3PGDhtNZI9xxuyT0l8ooQQAhtHBTQENIexn1PgxTPphkom4p6MUtpPza8CFXU+WB/zI+qMI2BOm+R0WFwPfteWD6Rk8w9dOfgExmxWvfrCz7RWp6VLYBdV0Z5MuWLMDFhIhRnRbBs4N/uIJCQNW+qn6D+J5KRMUPMMEf6ZyoI1AwZbFvbRwd3RaaxWbgOtlA5O0XgeL0ShaKY4zcStQ2eeS6j6X5/hNe9yfdubnTRHmB66noQuXSUSOeXyQ5kUeVGTjtfb+iaig7Hw7XNdancG3h3q05wdJyNcebPZ0ydoeEcrri03CCgNfHYnoVJW+G/ZKgT92sWw6ufYQqTZOXjM2YxqUQhphJFuMV3HssUvYXt7bMD+c+PNcQPBRk2q1oMvQuHSg28l7aPhuJj506Fbr35LZj69VdSH/G/9V2M1HXHZEAl45oKdXgv/gj/kFVlVj8yc8osGUWlxMyD6ZO5ivBy41J8NWu7/K69lZZc5OD+u6XXXrskWvN9U0NgJbkopK8ssfDoQOMrKjIEo5tZFFuc3IHL6s2klWh1Ceg2GMbXri2CISd7EyzvF7n7x6R2w538wBNAnryF0AwjMpfg+y0JaoiauSB9LY8Ivau/rbi64Wy1rPR5ZtKC2xTHI6qWMNxFxwTa7iESWQu04MBJpt+y0ThpWHX5E9QuhTUlsZwSicTqTYnY+FtFSV3iz1lqKXIw5iCJq6qUaLkRrKzQZASiMTYhwnB1RTgPzQvqzuyD4/3LbSkIyuu9cLXWUwZQucg4KJZeBkNEOxIO2MQPWBFVcOqwPIGvvEYgm709IToYZoXO17kd/hWwN4cmeK0Pb7+rL+yOTTJ2/e0OcRTD9hkvBr0GHnZqKOAmwI/7f0NzlY94MCgarjDfymGKAbOLMoy0wzi/7X9trrCcsujchQK1ODarY2ZX6bp+JkemZbIcfoOXTxcu1xoKzmwC9BqW7IXHH3SH2fOowr7GPT2+UJDVOCyi9HsBd8E7LYdDlB7LFcxU6SPr+7PSeMmYiCFjrnp4oTw8Sw5zDmwdA8w72+ENJpPCjH3dVke+sFZ5Sfu/4yur5gLORfOtHwubivzpDrWwpWOa+lh1O6HF8+uOtvMLWI3QzJD4CY1WBb/tTro8dYmeTpe/SxBDfb1OxEeFnSLDE+K8JPviQrllLeN2UzTaNdhX/mM9saCLCq7PX/nvyGPTpZKk/yUzEauOBrAPWSR1DGAM=', 'page_age': None, 'title': 'San Diego weather in May 2025 | California, USA', 'type': 'web_search_result', 'url': 'https://www.weather2travel.com/california/san-diego/may/'}, {'encrypted_content': 
'EtYOCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDGqgGytAhvjuSuxi2hoM6GmvJIrZrDzJWFSvIjAW5I5tQJPtb17fY2epdSGEtbBrTLbejFtzdfhqZbBoOr2KSFqdJANEB72OhGiASJMq2Q0mCOD5XYmVEVSZm8Nbi0ZVkNPs6dw0Lde9j3wxdTaBPcLsWZTrx0PzN3xnDEYC/bRAkQY3jq/MOnGEGf74dTeBg2nqq8L4U6EdTVMOMwZRhI4wL0Fwjztcfgd2WuVOVMHoZc+S6enPElm/j2s4VusDZMbHZUW2d5Q4LVOAM05UMyLF6BlEpsVMiTgtTZtU801c1F79GwUasUrKvg4aqstm8APbmDVoVenhTkkC8IHdG4vomREUW8Ne7deCC7JLO99CePe1JGk+n5QYIDbgqk9HmZA+stJ3uXheBTBjur7Djbl3wb54qr0GLrP3g1pEoXvES4beafT8xCtcxmolZ+kMbrJaKEOQQr4vAchxyu3sXxVwklyt9FQk5AREKMa0uq5JzXZuAvImQc6Vpgzl6OLmsDwJKPbArjo6X9z+ZGm1Vb7D3phlNlxoKz4hNOB0WJ8RM5TSL0pXbO3cELrOrc/lt4+Cd73pRZ0AYi/ft5B/dL6jv8JEJOnh88JTX2yS4NwTdvusq/A96TRuQfmjwIfUFD5Q8LAZm4fkRAkObFVRnqSwrSEwIRE07f2lrKtx/PtP4QxnJP1OmLCCYUnXJZMYGZj8SFWJuFxX+3qr1GyuMMbGCg94Rxp6MOnqcBvGof3nr7/cK6qZleqxY5ImzuIbwAh7xfmHTtITPgSEQWXKMXerHYlSoFTcEfTmn+/nZZuLsZKLRbxF8Q3Q04iOlO0TdlxPjAOlbd76+PspU1X4RQFKyT9WckO/yMSrMRLEQlqUtXqcO4WUg0o4aw00gkpaqS53y+UsUcFwOhHQAs4h3viG4O4PF2PNoXM51CmXvqF7mErFpXB/paD/elDT3Yl5Yqdg/Xz6fYzUFOO8odGt7jMw4jsr7pNpI282/CI6mLBTAgTrKmVeImPH/afYwcueaTp3DTryZBp+jdySYUp2GzlG3wgNbaSK87x9SIoUZHaCRRK64ZA5qwSq4xwVqJ5yaocMdaPqa4bF+FxZcV6aJEUftjYGbuUchtVPicviXxFrkRxoF2GsEE3sVLQ9Y2WGiiHkQNEq9a2NDGoX/WagDvBbIdfDz0/Uql75dFI+OLUftbjopaaWEOnsfDxevrvg7FLSU5g9QTcU/+tiEs3AjlWJrnPm6lSnA6anSx2U8ixF9r9AF8gejTc6x/fUWEY52pLxWcpQqld8yB2ni2iFkT8V0vDD2IFWjS+gt332ARpjN3mMtnCCWPQ+Quhv+EnZNUyQiEK4OQmxweWTI/eRx6eCMG6iGYGLFCZIV6gSs3kZ9NGfr7BtDRpG9AL96tL25RfLErLFEWo9iIr0ZHe5eg/PZEJ+1p5D0tOT6tmj5H0jFxVM0dmo1r+8CSVaGQ3WR/fRB5qQUe5C6ABRHssp4ebxMc9aJ4dSbeMm8iZ+iqWd4Ll+P0MCRqhOUalEZQ3PK6m5ETSuN7QbcDaLknS7WMJgVzJqCnJ6AYugohP3k+CqyAp3wu9P6eL/ZpGSmQilJhGPIvH00yWsFU/ZOQOXOepC9AQR9Q6lDhJejhDX05aCqt8/4mQGfSyyEFks5bNJJi9ch2P3+HtCOb+mYEHH7VOlcrfv3kk8z20WJD25bpyazSzMlk8Qhs5225QUOpTaGtUjK+TBtyPGcFSZkmXD61n5KGvD17/H6d+tyvD2bXggL/O0GaV6wfBIrtZ2pi4cBRu7RJW7IETlHaUMqvkOyi5bLZ7ZRszcqTGF9uI5z2MkQAKD8bpdHFBzR8aYtRNLZP0x/XRzZG4cecnLrCMdcxSH5J7sEB1W2a15TIYmSsnZTbPB0GfDGwbBDS65ZuyYW7bY9S+P21EjHv5tWd59ydii+1dRNhOP8bVoksi6+oPIXdbiA3XOtu9fGolAkUShKw6gSQtfQjcjqnJHcWAo59zxPv1qBoaCFaK+7VsAeXfJB4c3KEPKuNHU6ESW7R1nr2SbmUnVW2H9oiNw7VAk6FStVeZIwDiZ78RpnmNzDtE/3jPZsMtOHUMeO/TNnDPXprEESunMKdI0j5eZ1E5LOVCnWGO1IEMIZwJPsyx8lI3BzzlheDafskgBurESVIYQ00hh7VH6sC7CuJ5xn1K9ZZU83hgJMTZbXTBF/OFc2eqWfKxQMftdOmhgPbhg1ptHah9BYh7mQYD4SA8yB0fL3MlLn/h5+IzZR/gmGBjqc/iaXHYJZ1weP6xVKhUL4U6WSVUjE/0Y0CpHlChLSCbz7QeVOHLlnow4mhcubeMBbYIGcEEkN8uPsB3x1AcdK3P3s69Dzm1jYkcaEFUaWIlCycxKx/usFtTaNdGS5TlJtqF02mZCFVNwgibXfgKfEHhTp1wpDqixV2VkGAM=', 'page_age': '19 hours ago', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result', 'url': 'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'tool_use_id': 'srvtoolu_01CvUbkC6o551QbYJAGPGrTk', 'type': 'web_search_tool_result'}, {'citations': None, 'text': "\n\nBased on the search results, here's the current weather in San Diego:\n\n", 'type': 'text'}, {'citations': [{'cited_text': 'May Gray is back in the forecast after record heat this weekend, temperatures are expected to plummet up to 40 degrees by Tuesday!A cooling trend ramp...', 'encrypted_index': 'EpIBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDIhIjvDJ98+7J/rvyhoMt5pjxoZVH/3xOQd/IjBthwJMYO7C9tGA6ppbkJ4lj8fDEBt7vPBHi8MDiMOGCEjWuUo3755EM0RX59UHVgUqFm4OSvSWdCD8DrDMmQ2XB7D9TZzzkQ0YBA==', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result_location', 'url': 
'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'text': 'May Gray has returned to San Diego after a record hot weekend, with temperatures dropping dramatically.', 'type': 'text'}, {'citations': None, 'text': ' \n\n', 'type': 'text'}, {'citations': [{'cited_text': "Tuesday's Highs: Coast: 60-66° Inland: 59-66° Mountains: 48-60° Deserts: 77-80° · Follow ABC 10News Meteorologist Megan Parry on Facebook at Megan Par...", 'encrypted_index': 'EpEBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDOFKycPha94+KGZP7BoMUdJy8hfB12euYPPXIjDnaX5lWQh3d6Bm+jSsM9zisAHhhJW5nHbFx2QZpE25DjgO8FHD8CayJUr12G0XmEUqFZRmtm1wRtz6By6c7lWhneIPZTAu3RgE', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'text': "Today's (May 13, 2025) expected high temperatures in San Diego are:\n- Coast: 60-66°F\n- Inland: 59-66°F\n- Mountains: 48-60°F\n- Deserts: 77-80°F", 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': '... Tuesday will bring a return of May Gray with limited clearing into the afternoon and anything from heavy mist, drizzle to a few light showers. The...', 'encrypted_index': 'EpIBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDNByYnpkmBmrSi02RBoMej9t3/ktxdJBokWfIjBoNKyCKYJvJ2ikhvA0KlT4ujbpjqPzsUe+OBii6fRXSduFO6TB84r5z7oydxlqThIqFgPfyYvKMm4Is6p3WSoVP57ETtaXNKMYBA==', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'text': 'The day is expected to have "May Gray" conditions with limited clearing into the afternoon. There might be heavy mist, drizzle, or a few light showers, with the best chance of light rain having been overnight into this morning.', 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Gusty winds will continue to target the mountains and deserts where a Wind Advisory is in effect until 5am Wednesday. Expect west and southwesterly wi...', 'encrypted_index': 'EpQBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDI/W7IQUFEb50l2hSBoM4LD/qEoIb2kEllDTIjCqD6rLV/0dxmZ7n/5xLtq7PT5nlyBWTbNDl16PMoGOpE2uV6JV6xOOhC/K4lG7Ej8qGKJImwFtamEbi4dmkILRJlQ2iVVZmhOPxxgE', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'text': 'The mountains and desert areas are experiencing gusty winds with a Wind Advisory in effect until 5am Wednesday. West and southwesterly winds of 20 to 35mph with gusts of 40 to 65mph are expected, which may cause blowing dust and dirt leading to poor air quality at times.', 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'The air quality is generally acceptable for most individuals. 
However, sensitive groups may experience minor to moderate symptoms from long-term expos...', 'encrypted_index': 'EpEBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDPnyEjZ722g8lwxlnBoMYatt0aR6JU3vdZmYIjCDFZzb9xzKbTnlkJ/BQU7OvsLkkIgr95nLkH8CLk8Qz7kXcUtR1MFMzy9Zsyvuo5YqFf5O0YVXcBWcI4gsyYWX5sbIvjF6LBgE', 'title': 'San Diego, CA Weather Forecast | AccuWeather', 'type': 'web_search_result_location', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628'}], 'text': 'The air quality is generally acceptable for most individuals, though sensitive groups may experience minor to moderate symptoms from long-term exposure.', 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Sunny skies and warming temperatures will be the trend early next week. ', 'encrypted_index': 'EpEBCioIAxgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDM79OIyOcKVyqvGhbBoMOeT/EcmnZkeNf7YnIjAAMPqYMiLWU4fjAsqNeqCpnxP5ro+aAOvpbiRyGWloRoJUrez9X4CAUbBXEbREoWEqFbHD7JoGEsqN1RqWtEBZF+FAvdylbxgE', 'title': "San Diego's Weather Forecast for May 12, 2025: Drastic drop in temperatures!", 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures'}], 'text': 'Looking ahead, sunny skies and warming temperatures are expected early next week.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 11871, 'input_tokens': 11, 'output_tokens': 450, 'server_tool_use': {'web_search_requests': 1}}
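Note that server-side tool use is reported separately in usage, under server_tool_use. Tools can also be dropped mid-conversation: the next cell clears chat.tools, so the follow-up question is answered as plain text with no search.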
chat.tools = []
pr = 'Tell me about otters'
r = chat(pr)
r
Otters are fascinating semi-aquatic mammals that belong to the Mustelidae family, which also includes weasels, badgers, and wolverines. Here are some key facts about otters:
Physical characteristics:
- They have sleek, streamlined bodies adapted for swimming
- Dense, water-repellent fur that keeps them warm and dry
- Strong tails for steering and propulsion in water
- Webbed feet for efficient swimming
- Sharp claws and teeth for catching prey

Types of otters:
- Sea otters: Largest species, live primarily in ocean waters
- River otters: Found in freshwater habitats worldwide
- Giant otters: Native to South America, can grow up to 6 feet long

Behavior:
- Highly intelligent and playful animals
- Known for using tools (sea otters use rocks to crack open shellfish)
- Social creatures that often live in family groups
- Excellent swimmers and divers
- Most species are active during the day

Diet:
- Primarily carnivorous, eating fish, crustaceans, and other aquatic animals
- Sea otters particularly enjoy shellfish like clams, mussels, and sea urchins
- Some river otters also eat small mammals, birds, and amphibians

Conservation:
- Many otter species are threatened or endangered due to habitat loss, pollution, and hunting
- Sea otters are considered a keystone species, helping maintain healthy kelp forests by controlling sea urchin populations
Otters are beloved for their playful behavior and charming appearance, making them popular attractions at zoos and aquariums worldwide.
- id:
msg_015ogBe6MsHWGK4J4aKybqq8
- content:
[{'citations': None, 'text': 'Otters are fascinating semi-aquatic mammals that belong to the Mustelidae family, which also includes weasels, badgers, and wolverines. Here are some key facts about otters:\n\nPhysical characteristics:\n- They have sleek, streamlined bodies adapted for swimming\n- Dense, water-repellent fur that keeps them warm and dry\n- Strong tails for steering and propulsion in water\n- Webbed feet for efficient swimming\n- Sharp claws and teeth for catching prey\n\nTypes of otters:\n- Sea otters: Largest species, live primarily in ocean waters\n- River otters: Found in freshwater habitats worldwide\n- Giant otters: Native to South America, can grow up to 6 feet long\n\nBehavior:\n- Highly intelligent and playful animals\n- Known for using tools (sea otters use rocks to crack open shellfish)\n- Social creatures that often live in family groups\n- Excellent swimmers and divers\n- Most species are active during the day\n\nDiet:\n- Primarily carnivorous, eating fish, crustaceans, and other aquatic animals\n- Sea otters particularly enjoy shellfish like clams, mussels, and sea urchins\n- Some river otters also eat small mammals, birds, and amphibians\n\nConservation:\n- Many otter species are threatened or endangered due to habitat loss, pollution, and hunting\n- Sea otters are considered a keystone species, helping maintain healthy kelp forests by controlling sea urchin populations\n\nOtters are beloved for their playful behavior and charming appearance, making them popular attractions at zoos and aquariums worldwide.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 6893, 'cache_read_input_tokens': 0, 'input_tokens': 4, 'output_tokens': 361, 'server_tool_use': None}
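The same Chat object also supports extended thinking: passing maxthinktok sets the thinking-token budget for that call.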
= chat("Write a sentence about Python!", maxthinktok=1024)
r r
Python is a versatile, high-level programming language known for its readable syntax and wide application in data science, web development, artificial intelligence, and automation tasks.
Thinking
The request is asking me to write a sentence about Python. This could refer to either the programming language or the snake, but the programming language is a very common topic. I’ll write a concise, informative sentence about Python programming language.
- id:
msg_013Ugd4ytsFTF7LSeQJhrWrR
- content:
[{'signature': 'ErUBCkYIAxgCIkCabtM7zCu0RsMYFcuqwo9yx10T94X7+RSJo18/HNDS2s555y32OOX1r/A506DCxkFKsGdIRKaHQanErB3CYYxjEgyb/47sG4qcN1lTUjsaDFJ9IFmtHpojLufZ0CIwePt28AtXETLArmIOk4DdxaCDu3N91i4cMBso6yfHBC2Si8dIU5LNvDG6R7GXjKnKKh3PdpOQQSWsoECUCs9aVhcN9jc2DgqAQ5xRPlQWAxgC', 'thinking': "The request is asking me to write a sentence about Python. This could refer to either the programming language or the snake, but the programming language is a very common topic. I'll write a concise, informative sentence about Python programming language.", 'type': 'thinking'}, {'citations': None, 'text': 'Python is a versatile, high-level programming language known for its readable syntax and wide application in data science, web development, artificial intelligence, and automation tasks.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 7286, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 94, 'server_tool_use': None}
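In the raw message above, the reasoning arrives as a separate thinking block (with its signature) ahead of the text block. If you want the reasoning text on its own, a minimal sketch — assuming nothing beyond the type and thinking attributes visible in the dump — is:

def get_thinking(r):
    "Collect the text of any `thinking` blocks in a response's content."
    return [b.thinking for b in r.content if getattr(b, 'type', '') == 'thinking']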
chat.ns = mk_ns(sums)
chat.tools = [sums]
pr = 'What is 100 + 100?'
chat(pr)
r = chat()
r
The sum of 100 + 100 is 200.
- id:
msg_01BK1XCyfJCaZYQMgHwL7MVe
- content:
[{'citations': None, 'text': 'The sum of 100 + 100 is 200.', 'type': 'text'}]
- model:
claude-3-7-sonnet-20250219
- role:
assistant
- stop_reason:
end_turn
- stop_sequence:
None
- type:
message
- usage:
{'cache_creation_input_tokens': 96, 'cache_read_input_tokens': 7696, 'input_tokens': 6, 'output_tokens': 18, 'server_tool_use': None}
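The cell above re-enables tool calling with sums, the addition function defined earlier in this notebook: mk_ns builds the namespace Claudette uses to dispatch the call, and the second chat() passes the tool result back for the final answer. As a reminder of the shape a Claudette tool takes — a plain annotated Python function with docments-style comments — a rough sketch (the earlier definition may differ in detail) is:

def sums(
    a:int,  # First thing to sum
    b:int   # Second thing to sum
) -> int:   # The sum of the inputs
    "Adds a + b."
    return a + b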
chat
The sum of 100 + 100 is 200.
History
user: What is the weather in San Diego?
assistant: I’ll check the current weather in San Diego for you.
Based on the search results, here’s the current weather in San Diego:
May Gray has returned to San Diego after a record hot weekend, with temperatures dropping dramatically. 7
Today’s (May 13, 2025) expected high temperatures in San Diego are:
- Coast: 60-66°F
- Inland: 59-66°F
- Mountains: 48-60°F
- Deserts: 77-80°F 8
The day is expected to have “May Gray” conditions with limited clearing into the afternoon. There might be heavy mist, drizzle, or a few light showers, with the best chance of light rain having been overnight into this morning. 9
The mountains and desert areas are experiencing gusty winds with a Wind Advisory in effect until 5am Wednesday. West and southwesterly winds of 20 to 35mph with gusts of 40 to 65mph are expected, which may cause blowing dust and dirt leading to poor air quality at times. 10
The air quality is generally acceptable for most individuals, though sensitive groups may experience minor to moderate symptoms from long-term exposure. 11
Looking ahead, sunny skies and warming temperatures are expected early next week. 12
user: Tell me about otters
assistant: Otters are fascinating semi-aquatic mammals that belong to the Mustelidae family, which also includes weasels, badgers, and wolverines. Here are some key facts about otters:
Physical characteristics:
- They have sleek, streamlined bodies adapted for swimming
- Dense, water-repellent fur that keeps them warm and dry
- Strong tails for steering and propulsion in water
- Webbed feet for efficient swimming
- Sharp claws and teeth for catching prey

Types of otters:
- Sea otters: Largest species, live primarily in ocean waters
- River otters: Found in freshwater habitats worldwide
- Giant otters: Native to South America, can grow up to 6 feet long

Behavior:
- Highly intelligent and playful animals
- Known for using tools (sea otters use rocks to crack open shellfish)
- Social creatures that often live in family groups
- Excellent swimmers and divers
- Most species are active during the day

Diet:
- Primarily carnivorous, eating fish, crustaceans, and other aquatic animals
- Sea otters particularly enjoy shellfish like clams, mussels, and sea urchins
- Some river otters also eat small mammals, birds, and amphibians

Conservation:
- Many otter species are threatened or endangered due to habitat loss, pollution, and hunting
- Sea otters are considered a keystone species, helping maintain healthy kelp forests by controlling sea urchin populations
Otters are beloved for their playful behavior and charming appearance, making them popular attractions at zoos and aquariums worldwide.
user: Write a sentence about Python!
assistant: Python is a versatile, high-level programming language known for its readable syntax and wide application in data science, web development, artificial intelligence, and automation tasks.
user: What is 100 + 100?
assistant: I’ll calculate the sum of 100 + 100 for you.
user: 200
assistant: The sum of 100 + 100 is 200.

| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 35 | 0.000105 |
| Output tokens | 1,009 | 0.015135 |
| Cache tokens | 41,538 | 0.088261 |
| Server tool use | 1 | 0.010000 |
| Total | 42,582 | $0.113501 |
chat.cost, chat.use
(0.11350135,
In: 35; Out: 1009; Cache create: 21971; Cache read: 19567; Total Tokens: 42582; Server tool use (web search requests): 1)
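The total in chat.cost can be reproduced by hand from chat.use, assuming Claude 3.7 Sonnet’s list pricing ($3/M input tokens, $15/M output tokens, $3.75/M cache writes, $0.30/M cache reads, and $10 per 1,000 web searches):

# Recompute the cost from the usage numbers shown above
inp, out, cache_w, cache_r, searches = 35, 1009, 21971, 19567, 1
(3*inp + 15*out + 3.75*cache_w + 0.30*cache_r)/1e6 + 0.01*searches

0.11350135

This matches chat.cost above exactly.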
Third party providers
Amazon Bedrock
These are Amazon’s current Claude models:
models_aws
['claude-3-5-haiku-20241022',
'claude-3-7-sonnet-20250219',
'anthropic.claude-3-opus-20240229-v1:0',
'anthropic.claude-3-5-sonnet-20241022-v2:0']
anthropic at version 0.34.2 seems not to install boto3 as a dependency. You may need to do a pip install boto3, or the creation of the Client below will fail. Provided boto3 is installed, we otherwise don’t need any extra code to support Amazon Bedrock – we just have to set up the appropriate client:
ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
chat = Chat(cli=client)
"I'm Jeremy") chat(
Google Vertex
models_goog
from anthropic import AnthropicVertex
import google.auth
project_id = google.auth.default()[1]
region = "us-east5"
gv = AnthropicVertex(project_id=project_id, region=region)
client = Client(models_goog[-1], gv)
chat = Chat(cli=client)
"I'm Jeremy") chat(
Footnotes
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “May Gray is back in the forecast after record heat this weekend, temperatures are expected to plummet up to 40 degrees by Tuesday!A cooling trend ramp…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Tuesday’s Highs: Coast: 60-66° Inland: 59-66° Mountains: 48-60° Deserts: 77-80° · Follow ABC 10News Meteorologist Megan Parry on Facebook at Megan Par…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “… Tuesday will bring a return of May Gray with limited clearing into the afternoon and anything from heavy mist, drizzle to a few light showers. The…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Gusty winds will continue to target the mountains and deserts where a Wind Advisory is in effect until 5am Wednesday. Expect west and southwesterly wi…”↩︎
https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628 “The air quality is generally acceptable for most individuals. However, sensitive groups may experience minor to moderate symptoms from long-term expos…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Sunny skies and warming temperatures will be the trend early next week.”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “May Gray is back in the forecast after record heat this weekend, temperatures are expected to plummet up to 40 degrees by Tuesday!A cooling trend ramp…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Tuesday’s Highs: Coast: 60-66° Inland: 59-66° Mountains: 48-60° Deserts: 77-80° · Follow ABC 10News Meteorologist Megan Parry on Facebook at Megan Par…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “… Tuesday will bring a return of May Gray with limited clearing into the afternoon and anything from heavy mist, drizzle to a few light showers. The…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Gusty winds will continue to target the mountains and deserts where a Wind Advisory is in effect until 5am Wednesday. Expect west and southwesterly wi…”↩︎
https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628 “The air quality is generally acceptable for most individuals. However, sensitive groups may experience minor to moderate symptoms from long-term expos…”↩︎
https://www.10news.com/weather/san-diegos-weather-forecast-for-may-12-2025-drastic-drop-in-temperatures “Sunny skies and warming temperatures will be the trend early next week.”↩︎