import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
Claudette’s source
This is the ‘literate’ source code for Claudette. You can view the fully rendered version of the notebook here, or you can clone the git repo and run the interactive notebook in Jupyter. The notebook is converted to the Python module claudette/core.py using nbdev. The goal of this source code is to both create the Python module, and also to teach the reader how it is created, without assuming much existing knowledge about Claude’s API.
Most of the time you’ll see that we write some source code first, and then a description or discussion of it afterwards.
Setup
To print every HTTP request and response in full, uncomment the above line. This functionality is provided by Anthropic’s SDK.
from anthropic.types import Model
from claudette.text_editor import *
from typing import get_args
from datetime import datetime
from pprint import pprint
from IPython.display import Image
from cachy import enable_cachy
import warnings
enable_cachy()
warnings.filterwarnings("ignore", message="Pydantic serializer warnings")
If you’re reading the rendered version of this notebook, you’ll see an “Exported source” collapsible widget below. If you’re reading the source notebook directly, you’ll see #| exports at the top of the cell. These show that this piece of code will be exported into the python module that this notebook creates. No other code will be included – any other code in this notebook is just for demonstration, documentation, and testing.
You can toggle expanding/collapsing the source code of all exported sections by using the </> Code menu in the top right of the rendered notebook page.
Exported source
model_types = {
# Anthropic
'claude-opus-4-5': 'opus',
'claude-sonnet-4-5': 'sonnet',
'claude-haiku-4-5': 'haiku',
'claude-opus-4-1-20250805': 'opus-4-1',
'claude-opus-4-20250514': 'opus-4',
'claude-3-opus-20240229': 'opus-3',
'claude-sonnet-4-20250514': 'sonnet-4',
'claude-3-7-sonnet-20250219': 'sonnet-3-7',
'claude-3-5-sonnet-20241022': 'sonnet-3-5',
'claude-3-haiku-20240307': 'haiku-3',
'claude-3-5-haiku-20241022': 'haiku-3-5',
# AWS
'anthropic.claude-opus-4-1-20250805-v1:0': 'opus',
'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet',
'anthropic.claude-3-opus-20240229-v1:0': 'opus-3',
'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet',
'anthropic.claude-3-haiku-20240307-v1:0': 'haiku',
# Google
'claude-opus-4-1@20250805': 'opus',
'claude-3-5-sonnet-v2@20241022': 'sonnet',
'claude-3-opus@20240229': 'opus-3',
'claude-3-sonnet@20240229': 'sonnet',
'claude-3-haiku@20240307': 'haiku',
}
all_models = list(model_types)
models
['claude-opus-4-5',
'claude-sonnet-4-5',
'claude-haiku-4-5',
'claude-opus-4-1-20250805',
'claude-opus-4-20250514',
'claude-3-opus-20240229',
'claude-sonnet-4-20250514',
'claude-3-7-sonnet-20250219',
'claude-3-5-sonnet-20241022',
'claude-3-haiku-20240307']
Exported source
text_only_models = ('claude-3-5-haiku-20241022',)
Exported source
has_streaming_models = set(all_models)
has_system_prompt_models = set(all_models)
has_temperature_models = set(all_models)
has_extended_thinking_models = {
'claude-opus-4-5', 'claude-opus-4-1-20250805', 'claude-opus-4-20250514',
'claude-sonnet-4-20250514', 'claude-3-7-sonnet-20250219', 'sonnet-4-5',
'haiku-4-5'
}
has_extended_thinking_models
{'claude-3-7-sonnet-20250219',
'claude-opus-4-1-20250805',
'claude-opus-4-20250514',
'claude-opus-4-5',
'claude-sonnet-4-20250514',
'haiku-4-5',
'sonnet-4-5'}
can_use_extended_thinking
can_use_extended_thinking (m)
Exported source
def can_stream(m): return m in has_streaming_models
def can_set_system_prompt(m): return m in has_system_prompt_models
def can_set_temperature(m): return m in has_temperature_models
def can_use_extended_thinking(m): return m in has_extended_thinking_models
can_set_temperature
can_set_temperature (m)
can_set_system_prompt
can_set_system_prompt (m)
can_stream
can_stream (m)
We include these functions to provide a uniform library interface with cosette, since OpenAI models such as o1 do not have many of these capabilities.
assert can_stream('claude-3-5-sonnet-20241022') and can_set_system_prompt('claude-3-5-sonnet-20241022') and can_set_temperature('claude-3-5-sonnet-20241022')
These are the current versions and prices of Anthropic’s models at the time of writing.
model = models[0]
model
'claude-opus-4-5'
For examples, we’ll use the latest Opus, since it’s awesome.
Anthropic SDK
cli = Anthropic()
This is what Anthropic’s SDK provides for interacting with Claude from Python. To use it, pass it a list of messages, each with a content and a role. The roles should alternate between user and assistant.
After the code below you’ll see an indented section with an orange vertical line on the left. This is used to show the result of running the code above. Because the code is running in a Jupyter Notebook, we don’t have to use print to display results, we can just type the expression directly, as we do with r here.
m = {'role': 'user', 'content': "I'm Jeremy"}
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
r
Hi Jeremy, nice to meet you! How can I help you today?
- id:
msg_019NCk6wKu7iiNLrFhG2pCnV - content:
[{'citations': None, 'text': 'Hi Jeremy, nice to meet you! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}
Formatting output
That output is pretty long and hard to read, so let’s clean it up. We’ll start by pulling out the Content part of the message. To do that, we’re going to write our first function, which will be included in the claudette/core.py module.
This is the first exported public function or class we’re creating (the previous export was of a variable). In the rendered version of the notebook for these you’ll see 4 things, in this order (unless the symbol starts with a single _, which indicates it’s private):
- The signature (with the symbol name as a heading, with a horizontal rule above)
- A table of parameter docs (if provided)
- The doc string (in italics).
- The source code (in a collapsible “Exported source” block)
After that, we generally provide a bit more detail on what we’ve created, and why, along with a sample usage.
find_block
find_block (r:collections.abc.Mapping, blk_type:type|str=<class 'anthropic.types.text_block.TextBlock'>)
Find the first block of type blk_type in r.content.
| Type | Default | Details | |
|---|---|---|---|
| r | Mapping | The message to look in | |
| blk_type | type | str | TextBlock | The type of block to find |
Exported source
def _type(x):
try: return x.type
except AttributeError: return x.get('type')
def find_block(r:abc.Mapping, # The message to look in
blk_type:type|str=TextBlock # The type of block to find
):
"Find the first block of type `blk_type` in `r.content`."
f = (lambda x:_type(x)==blk_type) if isinstance(blk_type,str) else (lambda x:isinstance(x,blk_type))
    return first(o for o in r.content if f(o))
This makes it easier to grab the needed parts of Claude’s responses, which can include multiple pieces of content. By default, we look for the first text block. That will generally have the content we want to display.
find_block(r)
TextBlock(citations=None, text='Hi Jeremy, nice to meet you! How can I help you today?', type='text')
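Since blk_type can also be a string, we can match on the raw type field rather than the class – handy when a response block is a plain dict rather than an SDK object. This (an illustrative call, not from the original notebook) finds the same block as above:
find_block(r, blk_type='text')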
def contents(r):
"Helper to get the contents from Claude response `r`."
blk = find_block(r)
if not blk and r.content: blk = r.content[0]
if hasattr(blk,'text'): return blk.text.strip()
elif hasattr(blk,'content'): return blk.content.strip()
elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
    return str(blk)
For display purposes, we often just want to show the text itself.
contents(r)
'Hi Jeremy, nice to meet you! How can I help you today?'
Exported source
@patch
def _repr_markdown_(self:(Message)):
det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
cts = re.sub(r'\$', '$', contents(self)) # escape `$` for jupyter latex
return f"""{cts}
<details>
- {det}
</details>"""Jupyter looks for a _repr_markdown_ method in displayed objects; we add this in order to display just the content text, and collapse full details into a hideable section. Note that patch is from fastcore, and is used to add (or replace) functionality in an existing class. We pass the class(es) that we want to patch as type annotations to self. In this case, _repr_markdown_ is being added to Anthropic’s Message class, so when we display the message now we just see the contents, and the details are hidden away in a collapsible details block.
r
Hi Jeremy, nice to meet you! How can I help you today?
- id:
msg_019NCk6wKu7iiNLrFhG2pCnV - content:
[{'citations': None, 'text': 'Hi Jeremy, nice to meet you! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}
One key part of the response is the usage key, which tells us how many tokens we used by returning a Usage object.
We’ll add some helpers to make things a bit cleaner for creating and formatting these objects.
r.usage
In: 10; Out: 18; Cache create: 0; Cache read: 0; Total Tokens: 28; Search: 0
server_tool_usage
server_tool_usage (web_search_requests=0)
Little helper to create a server tool usage object
Exported source
def server_tool_usage(web_search_requests=0):
'Little helper to create a server tool usage object'
    return ServerToolUsage(web_search_requests=web_search_requests)
usage
usage (inp=0, out=0, cache_create=0, cache_read=0, server_tool_use=ServerToolUsage(web_search_requests=0))
Slightly more concise version of Usage.
| Type | Default | Details | |
|---|---|---|---|
| inp | int | 0 | input tokens |
| out | int | 0 | Output tokens |
| cache_create | int | 0 | Cache creation tokens |
| cache_read | int | 0 | Cache read tokens |
| server_tool_use | ServerToolUsage | ServerToolUsage(web_search_requests=0) | server tool use |
Exported source
def usage(inp=0, # input tokens
out=0, # Output tokens
cache_create=0, # Cache creation tokens
cache_read=0, # Cache read tokens
server_tool_use=server_tool_usage() # server tool use
):
'Slightly more concise version of `Usage`.'
return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create,
                 cache_read_input_tokens=cache_read, server_tool_use=server_tool_use)
The constructor provided by Anthropic is rather verbose, so we clean it up a bit, using a lowercase version of the name.
usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Search: 0
Usage.total
Usage.total ()
Exported source
def _dgetattr(o,s,d):
"Like getattr, but returns the default if the result is None"
return getattr(o,s,d) or d
@patch(as_prop=True)
def total(self:Usage): return self.input_tokens+self.output_tokens+_dgetattr(self, "cache_creation_input_tokens",0)+_dgetattr(self, "cache_read_input_tokens",0)
Adding a total property to Usage makes it easier to see how many tokens we’ve used up altogether.
usage(5,1).total
6
Usage.__repr__
Usage.__repr__ ()
Return repr(self).
Exported source
@patch
def __repr__(self:Usage):
io_toks = f'In: {self.input_tokens}; Out: {self.output_tokens}'
cache_toks = f'Cache create: {_dgetattr(self, "cache_creation_input_tokens",0)}; Cache read: {_dgetattr(self, "cache_read_input_tokens",0)}'
server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
server_tool_use_str = f'Search: {server_tool_use.web_search_requests}'
total_tok = f'Total Tokens: {self.total}'
    return f'{io_toks}; {cache_toks}; {total_tok}; {server_tool_use_str}'
In Python, patching __repr__ lets us change how an object is displayed. (More generally, methods starting and ending in __ in Python are called dunder methods, and have some magic behavior – such as, in this case, changing how an object is displayed.) We won’t be directly displaying ServerToolUsage objects, so we can handle their display behavior in the same Usage __repr__.
usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Search: 0
ServerToolUsage.__add__
ServerToolUsage.__add__ (b)
Add together each of the server tool use counts
Exported source
@patch
def __add__(self:ServerToolUsage, b):
"Add together each of the server tool use counts"
    return ServerToolUsage(web_search_requests=self.web_search_requests+b.web_search_requests)
And, patching __add__ lets + work on a ServerToolUsage as well as a Usage object.
server_tool_usage(1) + server_tool_usage(2)
ServerToolUsage(web_search_requests=3)
Usage.__add__
Usage.__add__ (b)
Add together each of input_tokens and output_tokens
Exported source
@patch
def __add__(self:Usage, b):
"Add together each of `input_tokens` and `output_tokens`"
return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens,
_dgetattr(self,'cache_creation_input_tokens',0)+_dgetattr(b,'cache_creation_input_tokens',0),
_dgetattr(self,'cache_read_input_tokens',0)+_dgetattr(b,'cache_read_input_tokens',0),
                 _dgetattr(self,'server_tool_use',server_tool_usage())+_dgetattr(b,'server_tool_use',server_tool_usage()))
r.usage+r.usage + usage(server_tool_use=server_tool_usage(1))
In: 20; Out: 36; Cache create: 0; Cache read: 0; Total Tokens: 56; Search: 1
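Since Usage now supports +, we can also accumulate usage across any number of responses with Python’s sum, as long as we supply an empty usage() as the start value – a small illustration, not from the original notebook:
sum([usage(5,1), usage(3,2)], usage())
In: 8; Out: 3; Cache create: 0; Cache read: 0; Total Tokens: 11; Search: 0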
Creating messages
Creating correctly formatted dicts from scratch every time isn’t very handy, so we’ll import a couple of helper functions from the msglm library.
Let’s use mk_msg to recreate our msg {'role': 'user', 'content': "I'm Jeremy"} from earlier.
prompt = "I'm Jeremy"
m = mk_msg(prompt)
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
r
Hi Jeremy, nice to meet you! How can I help you today?
- id:
msg_019NCk6wKu7iiNLrFhG2pCnV - content:
[{'citations': None, 'text': 'Hi Jeremy, nice to meet you! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}
We can pass more than just text messages to Claude. As we’ll see later we can also pass images, SDK objects, etc. To handle these different data types we need to pass the type along with our content to Claude.
Here’s an example of a multimodal message containing text and images.
{
'role': 'user',
'content': [
{'type':'text', 'text':'What is in the image?'},
{
'type':'image',
'source': {
'type':'base64', 'media_type':'media_type', 'data': 'data'
}
}
]
}
mk_msg infers the type automatically and creates the appropriate data structure.
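As a sketch of that convenience (the file path here is just a placeholder – assume any image file you have locally), mk_msg can build the structure above from a plain list mixing a string and raw image bytes:
img = Path('path/to/image.jpg').read_bytes()  # placeholder path, not from the original notebook
mk_msg(['What is in the image?', img])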
LLMs don’t actually have state; instead, dialogs are created by passing back all previous prompts and responses every time. With Claude, they always alternate user and assistant. We’ll use mk_msgs from msglm to make it easier to build up these dialog lists.
msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"])
msgs
[{'role': 'user', 'content': "I'm Jeremy"},
{'role': 'assistant',
'content': [TextBlock(citations=None, text='Hi Jeremy, nice to meet you! How can I help you today?', type='text')]},
{'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
cli.messages.create(messages=msgs, model=model, max_tokens=200)
Based on our conversation, you told me your name is Jeremy.
- id:
msg_01ANFNYSDButUqxmyEqxv66L - content:
[{'citations': None, 'text': 'Based on our conversation, you told me your name is Jeremy.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 42, 'output_tokens': 16, 'server_tool_use': None, 'service_tier': 'standard'}
Client
Client
Client (model, cli=None, log=False, cache=False)
Basic Anthropic messages client.
Exported source
class Client:
def __init__(self, model, cli=None, log=False, cache=False):
"Basic Anthropic messages client."
self.model,self.use = model,usage()
self.text_only = model in text_only_models
self.log = [] if log else None
self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
        self.cache = cache
We’ll create a simple Client for Anthropic which tracks usage and stores the model to use. We don’t add any methods right away – instead we’ll use patch for that so we can add and document them incrementally.
c = Client(model)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Search: 0
Exported source
@patch
def _r(self:Client, r:Message, prefill=''):
"Store the result of the message and accrue total usage."
if prefill:
blk = find_block(r)
if blk: blk.text = prefill + (blk.text or '')
self.result = r
self.use += r.usage
self.stop_reason = r.stop_reason
self.stop_sequence = r.stop_sequence
    return r
We use a _ prefix on private methods, but we document them here in the interests of literate source code.
_r will be used each time we get a new result, to track usage and also to keep the result available for later.
c._r(r)
c.use
In: 10; Out: 18; Cache create: 0; Cache read: 0; Total Tokens: 28; Search: 0
Whereas OpenAI’s models use a stream parameter for streaming, Anthropic’s use a separate method. We implement Anthropic’s approach in a private method, and then use a stream parameter in __call__ for consistency:
Exported source
@patch
def _log(self:Client, final, prefill, msgs, **kwargs):
self._r(final, prefill)
if self.log is not None: self.log.append({
"msgs": msgs, **kwargs,
"result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
})
    return self.result
Once streaming is complete, we need to store the final message and call any completion callback that’s needed.
get_types
get_types (msgs)
Exported source
@save_iter
def _stream(o, cm, prefill, cb):
with cm as s:
yield prefill
yield from s.text_stream
o.value = s.get_final_message()
    cb(o.value)
get_types(msgs)
['text', 'text', 'text']
mk_tool_choice
mk_tool_choice (choose:Union[str,bool,NoneType])
Create a tool_choice dict that’s ‘auto’ if choose is None, ‘any’ if it is True, or ‘tool’ otherwise
print(mk_tool_choice('sums'))
print(mk_tool_choice(True))
print(mk_tool_choice(None))
{'type': 'tool', 'name': 'sums'}
{'type': 'any'}
{'type': 'auto'}
Claude can be forced to use a particular tool, or select from a specific list of tools, or decide for itself when to use a tool. If you want to force a tool (or force choosing from a list), include a tool_choice param with a dict from mk_tool_choice.
Claude supports adding an extra assistant message at the end, which contains the prefill – i.e. the text we want Claude to assume the response starts with. However Claude doesn’t actually repeat that in the response, so for convenience we add it.
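Concretely, with prefill='According to Douglas Adams,' the request we send ends with an extra assistant message, roughly like this (a sketch of the wire format, not exported code):
[{'role': 'user', 'content': 'Very concisely, what is the meaning of life?'},
 {'role': 'assistant', 'content': 'According to Douglas Adams,'}]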
Client.__call__
Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, maxthinktok=0, prefill='', stream:bool=False, stop=None, tools:Optional[list]=None, tool_choice:Optional[dict]=None, cb=None, metadata:MetadataParam|Omit=<anthropic.Omit object at 0x7f3368b0e320>, service_tier:"Literal['auto','standard_ only']|Omit"=<anthropic.Omit object at 0x7f3368b0e320>, stop_sequences:SequenceNotStr[str]|Omit=<anthropic.Omit object at 0x7f3368b0e320>, system:Union[str,Iterable[Tex tBlockParam]]|Omit=<anthropic.Omit object at 0x7f3368b0e320>, temperature:float|Omit=<anthropic.Omit object at 0x7f3368b0e320>, thinking:ThinkingConfigParam|Omit=<anthropic.Omit object at 0x7f3368b0e320>, top_k:int|Omit=<anthropic.Omit object at 0x7f3368b0e320>, top_p:float|Omit=<anthropic.Omit object at 0x7f3368b0e320>, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Make a call to Claude.
| Type | Default | Details | |
|---|---|---|---|
| msgs | list | List of messages in the dialog | |
| sp | str | The system prompt | |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| maxthinktok | int | 0 | Maximum thinking tokens |
| prefill | str | Optional prefill to pass to Claude as start of its response | |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tools | Optional | None | List of tools to make available to Claude |
| tool_choice | Optional | None | Optionally force use of some tool |
| cb | NoneType | None | Callback to pass result to when complete |
| metadata | MetadataParam | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| service_tier | Literal[‘auto’, ‘standard_only’] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| stop_sequences | SequenceNotStr[str] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| system | Union[str, Iterable[TextBlockParam]] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| temperature | float | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| thinking | ThinkingConfigParam | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| top_k | int | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| top_p | float | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| extra_headers | Optional | None | Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs. The extra values given here take precedence over values defined on the client or passed to this method. |
| extra_query | Query | None | None | |
| extra_body | Body | None | None | |
| timeout | float | httpx.Timeout | None | NotGiven | NOT_GIVEN |
Exported source
@patch
def _precall(self:Client, msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
stop, tools, tool_choice, kwargs):
if tools: kwargs['tools'] = [get_schema(o) if callable(o) else o for o in listify(tools)]
if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
if maxthinktok:
kwargs['thinking'] = {'type':'enabled', 'budget_tokens':maxthinktok}
temp,prefill = 1,''
pref = [prefill.strip()] if prefill else []
if not isinstance(msgs,list): msgs = [msgs]
if stop is not None:
if not isinstance(stop, (list)): stop = [stop]
kwargs["stop_sequences"] = stop
msgs = mk_msgs(msgs+pref, cache=self.cache, cache_last_ckpt_only=self.cache)
assert not ('image' in get_types(msgs) and self.text_only), f"Images not supported by: {self.model}"
kwargs |= dict(max_tokens=maxtok, system=sp, temperature=temp)
    return msgs, kwargs
Exported source
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
msgs:list, # List of messages in the dialog
sp='', # The system prompt
temp=0, # Temperature
maxtok=4096, # Maximum tokens
maxthinktok=0, # Maximum thinking tokens
prefill='', # Optional prefill to pass to Claude as start of its response
stream:bool=False, # Stream response?
stop=None, # Stop sequence
tools:Optional[list]=None, # List of tools to make available to Claude
tool_choice:Optional[dict]=None, # Optionally force use of some tool
cb=None, # Callback to pass result to when complete
**kwargs):
"Make a call to Claude."
msgs,kwargs = self._precall(msgs, prefill, sp, temp, maxtok, maxthinktok, stream,
stop, tools, tool_choice, kwargs)
m = self.c.messages
f = m.stream if stream else m.create
res = f(model=self.model, messages=msgs, **kwargs)
def _cb(v):
self._log(v, prefill=prefill, msgs=msgs, **kwargs)
if cb: cb(v)
if stream: return _stream(res, prefill, _cb)
try: return res
    finally: _cb(res)
Defining __call__ lets us use an object like a function (i.e. it’s callable). We use it as a small wrapper over messages.create.
c = Client(model, log=True)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Search: 0
c('Hi')
Hi there! How can I help you today?
- id:
msg_018BZG2c5BM4H1Yzv57EmpT1 - content:
[{'citations': None, 'text': 'Hi there! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 8, 'output_tokens': 13, 'server_tool_use': None, 'service_tier': 'standard'}
Usage details are automatically updated after each call:
c.use
In: 8; Out: 13; Cache create: 0; Cache read: 0; Total Tokens: 21; Search: 0
A log of all messages is kept if log=True is passed:
pprint(c.log)
[{'max_tokens': 4096,
'msgs': [{'content': 'Hi', 'role': 'user'}],
'result': Message(id='msg_018BZG2c5BM4H1Yzv57EmpT1', content=[TextBlock(citations=None, text='Hi there! How can I help you today?', type='text')], model='claude-opus-4-5-20251101', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 8; Out: 13; Cache create: 0; Cache read: 0; Total Tokens: 21; Search: 0),
'stop_reason': 'end_turn',
'stop_sequence': None,
'system': '',
'temperature': 0,
'use': In: 8; Out: 13; Cache create: 0; Cache read: 0; Total Tokens: 21; Search: 0}]
Let’s try out prefill:
q = "Very concisely, what is the meaning of life?"
pref = 'According to Douglas Adams, '
c(q, prefill=pref)
According to Douglas Adams,
42
More seriously, there’s no consensus. Common answers include:
- Create your own (existentialism)
- Happiness/flourishing (Aristotle)
- Connection and love
- Service to others
- Religious purpose (fulfill God’s will, liberation, etc.)
- There isn’t one (nihilism)—though many find freedom in that
The question might matter more than any single answer.
- id:
msg_01Mncqgtt97xTmptGFZWQZ8f - content:
[{'citations': None, 'text': "According to Douglas Adams, \n\n**42**\n\nMore seriously, there's no consensus. Common answers include:\n\n- **Create your own** (existentialism)\n- **Happiness/flourishing** (Aristotle)\n- **Connection and love**\n- **Service to others**\n- **Religious purpose** (fulfill God's will, liberation, etc.)\n- **There isn't one** (nihilism)—though many find freedom in that\n\nThe question might matter more than any single answer.", 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 24, 'output_tokens': 107, 'server_tool_use': None, 'service_tier': 'standard'}
c.use
In: 32; Out: 120; Cache create: 0; Cache read: 0; Total Tokens: 152; Search: 0
We can pass stream=True to stream the response back incrementally:
r = c('Hi', stream=True)
for o in r: print(o, end='')
Hi there! How can I help you today?
c.use
In: 40; Out: 133; Cache create: 0; Cache read: 0; Total Tokens: 173; Search: 0
The full final message after completion of streaming is in the value attr of the response:
r.value
Hi there! How can I help you today?
- id:
msg_01AEd2TrDsoRaSeMQHWxmBfX - content:
[{'citations': None, 'text': 'Hi there! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 8, 'output_tokens': 13, 'server_tool_use': None, 'service_tier': 'standard'}
for o in c(q, prefill=pref, stream=True): print(o, end='')
According to Douglas Adams,
**42**
More seriously, there's no consensus. Common answers include:
- **Create your own** (existentialism)
- **Happiness/flourishing** (Aristotle)
- **Connection and love**
- **Service to others**
- **Religious purpose** (fulfill God's will, liberation, etc.)
- **There isn't one** (nihilism)—though many find freedom in that
The question itself may matter more than any single answer.
c.use
In: 64; Out: 241; Cache create: 0; Cache read: 0; Total Tokens: 305; Search: 0
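The cb parameter documented above is handed the final Message once a call completes (including after streaming finishes), so it can be used for lightweight logging or bookkeeping – an illustrative sketch, not from the original notebook:
r = c('Hi', stream=True, cb=lambda m: print('\nOutput tokens:', m.usage.output_tokens))
for o in r: print(o, end='')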
Pass a stop sequence if you want Claude to stop generating text when it encounters it.
c("Count from 1 to 10", stop="5")1, 2, 3, 4,
- id:
msg_011X9tcGnT8VifWTcYk3CroL - content:
[{'citations': None, 'text': '1, 2, 3, 4, ', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
stop_sequence - stop_sequence:
5 - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 15, 'output_tokens': 14, 'server_tool_use': None, 'service_tier': 'standard'}
This also works with streaming, and you can pass more than one stop sequence:
for o in c("Count from 1 to 10", stop=["3", "yellow"], stream=True): print(o, end='')
print()
print(c.stop_reason, c.stop_sequence)
1, 2,
stop_sequence 3
We’ve shown the token usage, but what we really care about is pricing. Let’s extract the latest pricing from Anthropic into a pricing dict.
get_pricing
get_pricing (m, u)
Exported source
def get_pricing(m, u):
    return pricing[m][:3] if u.prompt_token_count < 128_000 else pricing[m][3:]
Similarly, let’s get the pricing for the latest server tools:
We’ll patch Usage to enable it to compute the cost, given pricing.
Usage.cost
Usage.cost (costs:tuple)
Exported source
@patch
def cost(self:Usage, costs:tuple) -> float:
cache_w, cache_r = _dgetattr(self, "cache_creation_input_tokens",0), _dgetattr(self, "cache_read_input_tokens",0)
tok_cost = sum([self.input_tokens * costs[0] + self.output_tokens * costs[1] + cache_w * costs[2] + cache_r * costs[3]]) / 1e6
server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return tok_cost + server_tool_cost
Client.cost
Client.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Client) -> float: return self.use.cost(pricing[model_types[self.model]])
get_costs
get_costs (c)
Exported source
def get_costs(c):
costs = pricing[model_types[c.model]]
inp_cost = c.use.input_tokens * costs[0] / 1e6
out_cost = c.use.output_tokens * costs[1] / 1e6
cache_w = c.use.cache_creation_input_tokens
cache_r = c.use.cache_read_input_tokens
cache_cost = (cache_w * costs[2] + cache_r * costs[3]) / 1e6
server_tool_use = c.use.server_tool_use
server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return inp_cost, out_cost, cache_cost, cache_w + cache_r, server_tool_cost
The markdown repr of the client itself will show the latest result, along with the usage so far.
Exported source
@patch
def _repr_markdown_(self:Client):
if not hasattr(self,'result'): return 'No results yet'
msg = contents(self.result)
inp_cost, out_cost, cache_cost, cached_toks, server_tool_cost = get_costs(self)
return f"""{msg}
| Metric | Count | Cost (USD) |
|--------|------:|-----:|
| Input tokens | {self.use.input_tokens:,} | {inp_cost:.6f} |
| Output tokens | {self.use.output_tokens:,} | {out_cost:.6f} |
| Cache tokens | {cached_toks:,} | {cache_cost:.6f} |
| Server tool use | {self.use.server_tool_use.web_search_requests:,} | {server_tool_cost:.6f} |
| **Total** | **{self.use.total:,}** | **${self.cost:.6f}** |"""
c
1, 2,
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 94 | 0.000470 |
| Output tokens | 263 | 0.006575 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 357 | $0.007045 |
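As a sanity check on that table: assuming (for illustration only) this model costs $5 per million input tokens and $25 per million output tokens, the total works out as 94 × 5/1e6 + 263 × 25/1e6 = 0.000470 + 0.006575 ≈ $0.007045, matching the figure shown.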
Pass a list of alternating user/assistant messages to give Claude a “dialog”.
c(["My name is Jeremy", "Hi Jeremy!", "Can you remind me what my name is?"])Your name is Jeremy, as you just told me.
- id:
msg_015wUJU21XrgVsKYLBJQTF53 - content:
[{'citations': None, 'text': 'Your name is Jeremy, as you just told me.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 29, 'output_tokens': 14, 'server_tool_use': None, 'service_tier': 'standard'}
Tool use
Let’s now look more at tool use (aka function calling).
For testing, we need a function that Claude can call; we’ll write a simple function that adds numbers together, and will tell us when it’s being called:
@dataclass
class MySum: val:int
def sums(
a:int, # First thing to sum
b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
"Adds a + b."
print(f"Finding the sum of {a} and {b}")
    return MySum(a + b)
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "Always use tools when calculations are required."Claudette can autogenerate a schema thanks to the toolslm library. We’ll force the use of the tool using the function we created earlier.
tools=[get_schema(sums)]
choice = mk_tool_choice('sums')
We’ll start a dialog with Claude now. We’ll store the messages of our dialog in msgs. The first message will be our prompt pr, and we’ll pass our tools schema.
msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
r
ToolUseBlock(id=‘toolu_013DMEjVcw57u9LFY5yHua5s’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
- id:
msg_01PsL123azR8YVKFf3aBYQb6 - content:
[{'id': 'toolu_013DMEjVcw57u9LFY5yHua5s', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
tool_use - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 713, 'output_tokens': 57, 'server_tool_use': None, 'service_tier': 'standard'}
When Claude decides that it should use a tool, it passes back a ToolUseBlock with the name of the tool to call, and the params to use.
We don’t want to allow it to call just any possible function (that would be a security disaster!) so we create a namespace – that is, a dictionary of allowable function names to call.
ns = mk_ns(sums)
ns
{'sums': <function __main__.sums(a: int, b: int = 1) -> int>}
ToolResult is used for two special cases:
- When tool calls are RPCs, with claudette running on an application server and code execution happening elsewhere, wrapping the result with a result_type field acts as a type descriptor for the claudette client.
- Different result types need a specific format in the message history, so mk_funcres branches on result_type when building the Anthropic representation.
Currently images are the only supported tool result type – see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#example-of-tool-result-with-images for the format implemented in mk_funcres.
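For reference, the Anthropic tool-result-with-image format that mk_funcres targets looks roughly like this (field values elided):
{'type': 'tool_result',
 'tool_use_id': 'toolu_...',
 'content': [{'type': 'image',
              'source': {'type': 'base64', 'media_type': 'image/jpeg', 'data': '...'}}]}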
ToolResult
ToolResult (result_type:str, data)
Base class for objects needing a basic __repr__
mk_funcres
mk_funcres (fc, ns)
Given tool use block ‘fc’, get tool result, and create a tool_result response.
We can now use the function requested by Claude. We look it up in ns, and pass in the provided parameters.
fcs = [o for o in r.content if isinstance(o,ToolUseBlock)]
fcs
[ToolUseBlock(id='toolu_013DMEjVcw57u9LFY5yHua5s', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
res = [mk_funcres(fc, ns=ns) for fc in fcs]
res
Finding the sum of 604542 and 6458932
[{'type': 'tool_result',
'tool_use_id': 'toolu_013DMEjVcw57u9LFY5yHua5s',
'content': 'MySum(val=7063474)'}]
mk_toolres
mk_toolres (r:collections.abc.Mapping, ns:Optional[collections.abc.Mapping]=None)
Create a tool_result message from response r.
| Type | Default | Details | |
|---|---|---|---|
| r | Mapping | Tool use request response from Claude | |
| ns | Optional | None | Namespace to search for tools |
Exported source
def mk_toolres(
r:abc.Mapping, # Tool use request response from Claude
ns:Optional[abc.Mapping]=None # Namespace to search for tools
):
"Create a `tool_result` message from response `r`."
cts = getattr(r, 'content', [])
res = [mk_msg(r.model_dump(), role='assistant')]
if ns is None: ns=globals()
tcs = [mk_funcres(o, ns) for o in cts if isinstance(o,ToolUseBlock)]
if tcs: res.append(mk_msg(tcs))
    return res
foo = []
foo.append({})
foo.append({})
foo
[{}, {}]
In order to tell Claude the result of the tool call, we pass back the tool use assistant request and the tool_result response.
tr = mk_toolres(r, ns=ns)
tr
Finding the sum of 604542 and 6458932
[{'role': 'assistant',
'content': [{'id': 'toolu_013DMEjVcw57u9LFY5yHua5s',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_013DMEjVcw57u9LFY5yHua5s',
'content': 'MySum(val=7063474)'}]}]
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'}]
We add this to our dialog, and now Claude has all the information it needs to answer our question.
msgs += tr
contents(c(msgs, sp=sp, tools=tools))
'604542 + 6458932 = **7,063,474**'
contents(msgs[-1])
'MySum(val=7063474)'
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'},
{'role': 'assistant',
'content': [{'id': 'toolu_013DMEjVcw57u9LFY5yHua5s',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_013DMEjVcw57u9LFY5yHua5s',
'content': 'MySum(val=7063474)'}]}]
Text editing
Anthropic also has a special tool type specific to text editing.
tools = [text_editor_conf['sonnet']]
tools
[{'type': 'text_editor_20250728', 'name': 'str_replace_based_edit_tool'}]
pr = 'Could you please explain my _quarto.yml file?'
msgs = [mk_msg(pr)]
r = c(msgs, sp=sp, tools=tools)
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_011mHXkfSS4yHSsHP3CNUQZM', input={'command': 'view', 'path': '_quarto.yml'}, name='str_replace_based_edit_tool', type='tool_use')
We’ve gone ahead and created a reference implementation that you can use directly from our text_editor module, or use as a reference for creating your own.
ns = mk_ns(str_replace_based_edit_tool)
tr = mk_toolres(r, ns=ns)
msgs += tr
print(contents(c(msgs, sp=sp, tools=tools))[:128])
Here's a breakdown of your `_quarto.yml` file, which is the main configuration file for a **Quarto website project**:
---
## 1
Structured data
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "Always use your tools for calculations."for tools in [sums, [get_schema(sums)]]:
r = c(pr, tools=tools, tool_choice='sums')
    print(r)
Message(id='msg_019RqdGbF6jeW1gKqVyVvn5t', content=[ToolUseBlock(id='toolu_01LhByF4X8hG2BRvXcTjJjPr', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-opus-4-5-20251101', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 708; Out: 53; Cache create: 0; Cache read: 0; Total Tokens: 761; Search: 0)
Message(id='msg_019RqdGbF6jeW1gKqVyVvn5t', content=[ToolUseBlock(id='toolu_01LhByF4X8hG2BRvXcTjJjPr', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-opus-4-5-20251101', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 708; Out: 53; Cache create: 0; Cache read: 0; Total Tokens: 761; Search: 0)
ns = mk_ns(sums)
tr = mk_toolres(r, ns=ns)
Finding the sum of 604542 and 6458932
Client.structured
Client.structured (msgs:list, tools:Optional[list]=None, ns:Optional[collections.abc.Mapping]=None, sp='', temp=0, maxtok=4096, maxthinktok=0, prefill='', stream:bool=False, stop=None, tool_choice:Optional[dict]=None, cb=None, metadata:MetadataParam|Omit=<anthropic.Omit object at 0x7f3368b0e320>, service_tier:"Literal['auto','standar d_only']|Omit"=<anthropic.Omit object at 0x7f3368b0e320>, stop_sequences:SequenceNotStr[str]|Om it=<anthropic.Omit object at 0x7f3368b0e320>, system:U nion[str,Iterable[TextBlockParam]]|Omit=<anthropic.Omi t object at 0x7f3368b0e320>, temperature:float|Omit=<anthropic.Omit object at 0x7f3368b0e320>, thinking:ThinkingConfigParam|Omit=<anthropic.Omit object at 0x7f3368b0e320>, top_k:int|Omit=<anthropic.Omit object at 0x7f3368b0e320>, top_p:float|Omit=<anthropic.Omit object at 0x7f3368b0e320>, extra_headers:Headers|None=None, extra_query:Query|None=None, extra_body:Body|None=None, timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)
Return the value of all tool calls (generally used for structured outputs)
| Type | Default | Details | |
|---|---|---|---|
| msgs | list | List of messages in the dialog | |
| tools | Optional | None | List of tools to make available to Claude |
| ns | Optional | None | Namespace to search for tools |
| sp | str | The system prompt | |
| temp | int | 0 | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| maxthinktok | int | 0 | Maximum thinking tokens |
| prefill | str | Optional prefill to pass to Claude as start of its response | |
| stream | bool | False | Stream response? |
| stop | NoneType | None | Stop sequence |
| tool_choice | Optional | None | Optionally force use of some tool |
| cb | NoneType | None | Callback to pass result to when complete |
| metadata | MetadataParam | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| service_tier | Literal[‘auto’, ‘standard_only’] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| stop_sequences | SequenceNotStr[str] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| system | Union[str, Iterable[TextBlockParam]] | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| temperature | float | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| thinking | ThinkingConfigParam | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| top_k | int | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| top_p | float | Omit | <anthropic.Omit object at 0x7f3368b0e320> | |
| extra_headers | Optional | None | Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs. The extra values given here take precedence over values defined on the client or passed to this method. |
| extra_query | Query | None | None | |
| extra_body | Body | None | None | |
| timeout | float | httpx.Timeout | None | NotGiven | NOT_GIVEN |
Exported source
@patch
@delegates(Client.__call__)
def structured(self:Client,
msgs:list, # List of messages in the dialog
tools:Optional[list]=None, # List of tools to make available to Claude
ns:Optional[abc.Mapping]=None, # Namespace to search for tools
**kwargs):
"Return the value of all tool calls (generally used for structured outputs)"
tools = listify(tools)
res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
if ns is None: ns=mk_ns(*tools)
cts = getattr(res, 'content', [])
tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)]
    return tcs
Anthropic’s API does not support response formats directly, so instead we provide a structured method to use tool calling to achieve the same result. The result of the tool is not passed back to Claude in this case, but instead is returned directly to the user.
c.structured(pr, tools=[sums])
Finding the sum of 604542 and 6458932
[MySum(val=7063474)]
c
ToolUseBlock(id=‘toolu_015nDoaR9Y2Y7FXJSUeDfodX’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 6,552 | 0.032760 |
| Output tokens | 1,641 | 0.041025 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 8,193 | $0.073785 |
Custom Types with Tool Use
We need to add tool support for custom types too. Let’s test out custom types using a minimal example.
class Book(BasicRepr):
def __init__(self, title: str, pages: int): store_attr()
def __repr__(self):
return f"Book Title : {self.title}\nNumber of Pages : {self.pages}"Book("War and Peace", 950)Book Title : War and Peace
Number of Pages : 950
def find_page(book: Book, # The book to find the halfway point of
percent: int, # Percent of a book to read to, e.g. halfway == 50,
) -> int:
"The page number corresponding to `percent` completion of a book"
    return round(book.pages * (percent / 100.0))
get_schema(find_page)
{'name': 'find_page',
'description': 'The page number corresponding to `percent` completion of a book\n\nReturns:\n- type: integer',
'input_schema': {'type': 'object',
'properties': {'book': {'type': 'object',
'description': 'The book to find the halfway point of',
'$ref': '#/$defs/Book'},
'percent': {'type': 'integer',
'description': 'Percent of a book to read to, e.g. halfway == 50,'}},
'required': ['book', 'percent'],
'$defs': {'Book': {'type': 'object',
'properties': {'title': {'type': 'string', 'description': ''},
'pages': {'type': 'integer', 'description': ''}},
'title': 'Book',
'required': ['title', 'pages']}}}}
choice = mk_tool_choice('find_page')
choice
{'type': 'tool', 'name': 'find_page'}
Claudette will pack objects as dicts, so we’ll transform tool functions with user-defined types into tool functions that accept a dict in lieu of the user-defined type.
First let’s convert a single argument:
_is_builtin decides whether to pass an argument through as-is. Let’s check the argument conversion:
(_is_builtin(int), _is_builtin(Book), _is_builtin(List))
(True, False, True)
(_convert(555, int),
_convert({"title": "War and Peace", "pages": 923}, Book),
 _convert([1, 2, 3, 4], List))
(555,
Book Title : War and Peace
Number of Pages : 923,
[1, 2, 3, 4])
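The exported source for these helpers is collapsed in the rendered docs; a rough sketch of the idea (an illustration, not the actual implementation) might look like:
def _is_builtin_sketch(t):
    "True for types we can pass through unchanged (builtins and typing aliases)."
    return getattr(t, '__module__', None) in ('builtins', 'typing')

def _convert_sketch(v, t):
    "Pass builtins through unchanged; otherwise construct `t` from a dict of kwargs."
    return v if _is_builtin_sketch(t) or not isinstance(v, dict) else t(**v)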
Applying tool() to a function returns a new function in which user-defined types are replaced with dictionary inputs.
tool
tool (func)
A function is transformed into a function with dict arguments substituted for user-defined types. Built-in types such as percent here are left untouched.
find_page(book=Book("War and Peace", 950), percent=50)475
tool(find_page)({"title": "War and Peace", "pages": 950}, percent=50)475
By passing tools wrapped with tool(), tool calls with user-defined types now complete without failing.
pr = "How many pages do I have to read to get halfway through my 950 page copy of War and Peace"
tools = tool(find_page)
tools
<function __main__.find_page(book: __main__.Book, percent: int) -> int>
r = c(pr, tools=[tools])
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_01Qi8zXxxzpDnJDoxNKnvuqf', input={'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50}, name='find_page', type='tool_use')
tr = mk_toolres(r, ns=[tools])
tr
[{'role': 'assistant',
'content': [{'id': 'toolu_01Qi8zXxxzpDnJDoxNKnvuqf',
'input': {'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50},
'name': 'find_page',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_01Qi8zXxxzpDnJDoxNKnvuqf',
'content': '475'}]}]
msgs = [pr]+tr
contents(c(msgs, sp=sp, tools=[tools]))
'To get halfway through your 950-page copy of War and Peace, you need to read **475 pages**.'
Chat
Rather than manually adding the responses to a dialog, we’ll create a simple Chat class to do that for us, each time we make a request. We’ll also store the system prompt and tools here, to avoid passing them every time.
Chat
Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None, sp='', tools:Optional[list]=None, temp=0, cont_pr:Optional[str]=None, cache:bool=False, hist:list=None, ns:Optional[collections.abc.Mapping]=None)
Anthropic chat client.
| Type | Default | Details | |
|---|---|---|---|
| model | Optional | None | Model to use (leave empty if passing cli) |
| cli | Optional | None | Client to use (leave empty if passing model) |
| sp | str | Optional system prompt | |
| tools | Optional | None | List of tools to make available to Claude |
| temp | int | 0 | Temperature |
| cont_pr | Optional | None | User prompt to continue an assistant response |
| cache | bool | False | Use Claude cache? |
| hist | list | None | Initialize history |
| ns | Optional | None | Namespace to search for tools |
The class stores the Client that will provide the responses in c, and a history of messages in h.
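The exported __init__ is collapsed above; a rough sketch of what it does (based on the parameter table, not the exact source) is:
class ChatSketch:
    def __init__(self, model=None, cli=None, sp='', tools=None, temp=0,
                 cont_pr=None, cache=False, hist=None, ns=None):
        assert model or cli, "Pass either a model name or an existing Client"
        self.c = cli or Client(model, cache=cache)   # responses come from this Client
        self.h = mk_msgs(hist) if hist else []       # message history of the dialog
        self.sp,self.tools,self.temp,self.cont_pr = sp,tools,temp,cont_pr
        self.cache,self.ns = cache,ns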
sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
(In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Search: 0, [])
chat.c.use.cost(pricing[model_types[chat.c.model]])
0.0
This is clunky. Let’s add cost as a property for the Chat class. It will pass in the appropriate prices for the current model to the usage cost calculator.
Chat.cost
Chat.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Chat) -> float: return self.c.cost
chat.cost
0.0
Chat.__call__
Chat.__call__ (pr=None, temp=None, maxtok=4096, maxthinktok=0, stream=False, prefill='', tool_choice:Optional[dict]=None, **kw)
Call self as a function.
| Type | Default | Details | |
|---|---|---|---|
| pr | NoneType | None | Prompt / message |
| temp | NoneType | None | Temperature |
| maxtok | int | 4096 | Maximum tokens |
| maxthinktok | int | 0 | Maximum thinking tokens |
| stream | bool | False | Stream response? |
| prefill | str | Optional prefill to pass to Claude as start of its response | |
| tool_choice | Optional | None | Optionally force use of some tool |
| kw | VAR_KEYWORD |
Exported source
@patch
def _post_pr(self:Chat, pr, prev_role):
if pr is None and prev_role == 'assistant':
if self.cont_pr is None:
raise ValueError("Prompt must be given after completion, or use `self.cont_pr`.")
pr = self.cont_pr # No user prompt, keep the chain
    if pr: self.h.append(mk_msg(pr, cache=self.cache))
Exported source
@patch
def _append_pr(self:Chat, pr=None):
prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user'
if pr and prev_role == 'user': self() # already user request pending
    self._post_pr(pr, prev_role)
Exported source
@patch
def __call__(self:Chat,
pr=None, # Prompt / message
temp=None, # Temperature
maxtok=4096, # Maximum tokens
maxthinktok=0, # Maximum thinking tokens
stream=False, # Stream response?
prefill='', # Optional prefill to pass to Claude as start of its response
tool_choice:Optional[dict]=None, # Optionally force use of some tool
**kw):
if temp is None: temp=self.temp
self._append_pr(pr)
def _cb(v):
self.last = mk_toolres(v, ns=self.ns)
self.h += self.last
return self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok,
                  tools=self.tools, tool_choice=tool_choice, cb=_cb, **kw)
The __call__ method just passes the request along to the Client, but rather than just passing in this one prompt, it appends it to the history and passes it all along. As a result, we now have state!
chat = Chat(model, sp=sp)chat("I'm Jeremy")
chat("What's my name?")Your name is Jeremy.
- id:
msg_01WuVmkkqaWBbX5dRtcbd7Yy - content:
[{'citations': None, 'text': 'Your name is Jeremy.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 43, 'output_tokens': 8, 'server_tool_use': None, 'service_tier': 'standard'}
chat.use, chat.cost
(In: 60; Out: 26; Cache create: 0; Cache read: 0; Total Tokens: 86; Search: 0,
0.00095)
Let’s try out prefill too:
q = "Very concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
chat(q, prefill=pref)
According to Douglas Adams, 42. Philosophically, it’s subjective—often described as finding purpose, connection, happiness, or creating your own meaning through experiences and relationships.
- id:
msg_014uN2bEuC4RwW1bRtkzuw3X - content:
[{'citations': None, 'text': "According to Douglas Adams, 42. Philosophically, it's subjective—often described as finding purpose, connection, happiness, or creating your own meaning through experiences and relationships.", 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 71, 'output_tokens': 35, 'server_tool_use': None, 'service_tier': 'standard'}
By default, messages must be in user, assistant, user format. If this isn’t followed (e.g. calling chat() without a user message), it will error out:
try: chat()
except ValueError as e: print("Error:", e)Error: Prompt must be given after completion, or use `self.cont_pr`.
Setting cont_pr allows a “default prompt” to be specified when a prompt isn’t specified. Usually used to prompt the model to continue.
chat.cont_pr = "Tell me a little more..."
chat()
The question has been approached from many angles:
Philosophical perspectives: - Existentialists like Sartre say life has no inherent meaning—you create your own through choices and actions - Aristotle pointed to eudaimonia—flourishing through virtue and realizing your potential - Absurdists like Camus acknowledged life’s meaninglessness but argued we should embrace it anyway
Religious views generally point to serving a higher purpose, spiritual growth, or union with the divine
Practical takes often center on: - Building meaningful relationships - Contributing something beyond yourself - Pursuing growth and learning - Experiencing joy and reducing suffering
Many find the question itself is what matters—the searching, not a final answer. What draws you to the question?
- id:
msg_01UwCJxPxUMMay2VzC3sUqw4 - content:
[{'citations': None, 'text': "The question has been approached from many angles:\n\n**Philosophical perspectives:**\n- Existentialists like Sartre say life has no inherent meaning—you create your own through choices and actions\n- Aristotle pointed to *eudaimonia*—flourishing through virtue and realizing your potential\n- Absurdists like Camus acknowledged life's meaninglessness but argued we should embrace it anyway\n\n**Religious views** generally point to serving a higher purpose, spiritual growth, or union with the divine\n\n**Practical takes** often center on:\n- Building meaningful relationships\n- Contributing something beyond yourself\n- Pursuing growth and learning\n- Experiencing joy and reducing suffering\n\nMany find the question itself is what matters—the searching, not a final answer. What draws you to the question?", 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 115, 'output_tokens': 172, 'server_tool_use': None, 'service_tier': 'standard'}
We can also use streaming:
chat = Chat(model, sp=sp)
for o in chat("I'm Jeremy", stream=True): print(o, end='')Hello Jeremy, nice to meet you. How can I help you today?
r = chat(q, prefill=pref, stream=True)
for o in r: print(o, end='')
r.value
According to Douglas Adams, 42. Philosophically, there's no universal answer—it's likely something you create through relationships, purpose, experiences, and what you find meaningful.
According to Douglas Adams, 42. Philosophically, there’s no universal answer—it’s likely something you create through relationships, purpose, experiences, and what you find meaningful.
- id:
msg_01PphufmGuVQf3zGDpsUjUjr - content:
[{'citations': None, 'text': "According to Douglas Adams, 42. Philosophically, there's no universal answer—it's likely something you create through relationships, purpose, experiences, and what you find meaningful.", 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 55, 'output_tokens': 35, 'server_tool_use': None, 'service_tier': 'standard'}
You can provide a history of messages to initialise Chat with:
chat = Chat(model, sp=sp, hist=["Can you guess my name?", "Hmmm I really don't know. Is it 'Merlin G. Penfolds'?"])
chat('Wow how did you know?')
I didn’t actually know - I was just making a playful guess with an unusual name! If that’s really your name, that’s quite a remarkable coincidence. Merlin G. Penfolds is a wonderfully distinctive name.
Is it actually your name, or are you having a bit of fun with me?
- id:
msg_0118MmzzxwxD74GqRFdDYvV8 - content:
[{'citations': None, 'text': "I didn't actually know - I was just making a playful guess with an unusual name! If that's really your name, that's quite a remarkable coincidence. Merlin G. Penfolds is a wonderfully distinctive name.\n\nIs it actually your name, or are you having a bit of fun with me?", 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 58, 'output_tokens': 71, 'server_tool_use': None, 'service_tier': 'standard'}
Chat tool use
We automagically get streamlined tool use as well:
pr = f"What is {a}+{b}?"
pr'What is 604542+6458932?'
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr)
rFinding the sum of 604542 and 6458932
ToolUseBlock(id=‘toolu_015JXKzAtuJ3iR2WapJdFKmH’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
- id:
msg_011MFyNM4a5qEy6d8M8ndHUr - content:
[{'id': 'toolu_015JXKzAtuJ3iR2WapJdFKmH', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
tool_use - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 620, 'output_tokens': 72, 'server_tool_use': None, 'service_tier': 'standard'}
Now we need to send this result to Claude—calling the object with no parameters tells it to return the tool result to Claude:
chat()604542 + 6458932 = 7,063,474
- id:
msg_015Yoq6baMYzc3EsXq1vbZ7W - content:
[{'citations': None, 'text': '604542 + 6458932 = **7,063,474**', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 713, 'output_tokens': 20, 'server_tool_use': None, 'service_tier': 'standard'}
It should be correct, because it actually used our Python function to do the addition. Let’s check:
a+b7063474
Let’s try the same thing with streaming:
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr, stream=True)
for o in r: print(o, end='')Finding the sum of 604542 and 6458932
The full message, including tool call details, is in value:
r.valueToolUseBlock(id=‘toolu_014qU6kxpgXtQLVmd1Hmqx5t’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)
- id:
msg_01MDWAgzPkFiMgPv78wef8WA - content:
[{'id': 'toolu_014qU6kxpgXtQLVmd1Hmqx5t', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
tool_use - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 620, 'output_tokens': 72, 'server_tool_use': None, 'service_tier': 'standard'}
r = chat(stream=True)
for o in r: print(o, end='')604542 + 6458932 = **7,063,474**
r.value604542 + 6458932 = 7,063,474
- id:
msg_01BBBcUrMHdq7HdBiN95RHWA - content:
[{'citations': None, 'text': '604542 + 6458932 = **7,063,474**', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 713, 'output_tokens': 20, 'server_tool_use': None, 'service_tier': 'standard'}
The history shows both the tool_use and tool_result messages:
chat.h[{'role': 'user', 'content': 'What is 604542+6458932?'},
{'role': 'assistant',
'content': [{'id': 'toolu_014qU6kxpgXtQLVmd1Hmqx5t',
'input': {'a': 604542, 'b': 6458932},
'name': 'sums',
'type': 'tool_use'}]},
{'role': 'user',
'content': [{'type': 'tool_result',
'tool_use_id': 'toolu_014qU6kxpgXtQLVmd1Hmqx5t',
'content': 'MySum(val=7063474)'}]},
{'role': 'assistant',
'content': [{'citations': None,
'text': '604542 + 6458932 = **7,063,474**',
'type': 'text'}]}]
Let’s test a function with user-defined types.
chat = Chat(model, sp=sp, tools=[find_page])
r = chat("How many pages is three quarters of the way through my 80 page edition of Tao Te Ching?")
rToolUseBlock(id=‘toolu_01G7tiwtN5oecs62w1ppUyoP’, input={‘book’: {‘title’: ‘Tao Te Ching’, ‘pages’: 80}, ‘percent’: 75}, name=‘find_page’, type=‘tool_use’)
- id:
msg_01WLgnUbhPiUBejZX8VMqMa3 - content:
[{'id': 'toolu_01G7tiwtN5oecs62w1ppUyoP', 'input': {'book': {'title': 'Tao Te Ching', 'pages': 80}, 'percent': 75}, 'name': 'find_page', 'type': 'tool_use'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
tool_use - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 730, 'output_tokens': 86, 'server_tool_use': None, 'service_tier': 'standard'}
chat()Three quarters of the way through your 80-page edition of Tao Te Ching is page 60.
- id:
msg_017CZs1H7q24jK1fDMfdGpSZ - content:
[{'citations': None, 'text': 'Three quarters of the way through your 80-page edition of Tao Te Ching is page 60.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 830, 'output_tokens': 28, 'server_tool_use': None, 'service_tier': 'standard'}
Exported source
@patch
def _repr_markdown_(self:Chat):
if not hasattr(self.c, 'result'): return 'No results yet'
last_msg = contents(self.c.result)
def fmt_msg(m):
t = contents(m)
if isinstance(t, dict): return t['content']
return t
history = '\n\n'.join(f"**{m['role']}**: {fmt_msg(m)}"
for m in self.h)
det = self.c._repr_markdown_().split('\n\n')[-1]
if history: history = f"""
<details>
<summary>► History</summary>
{history}
</details>
"""
return f"""{last_msg}
{history}
{det}"""# TODO: fix history formatchatThree quarters of the way through your 80-page edition of Tao Te Ching is page 60.
► History
user: H
assistant: {‘id’: ‘toolu_01G7tiwtN5oecs62w1ppUyoP’, ‘input’: {‘book’: {‘title’: ‘Tao Te Ching’, ‘pages’: 80}, ‘percent’: 75}, ‘name’: ‘find_page’, ‘type’: ‘tool_use’}
user: 60
assistant: Three quarters of the way through your 80-page edition of Tao Te Ching is page 60.
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 1,560 | 0.007800 |
| Output tokens | 114 | 0.002850 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 1,674 | $0.010650 |
chat = Chat(model, tools=[text_editor_conf['sonnet']], ns=mk_ns(str_replace_based_edit_tool))When tools aren’t provided directly as Python functions (like sums above), you must create and pass a namespace dictionary (mapping the tool name string to the function object) via the ns parameter to methods like mk_toolres or toolloop, since toolslm cannot generate the namespace automatically in this case. For schema-based tools (i.e., Python functions), claudette handles namespace creation automatically.
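For illustration, the namespace can also be written out by hand. This is a sketch only; it assumes the tool is registered under its function name, which is what mk_ns produces for a plain callable:

ns = {'str_replace_based_edit_tool': str_replace_based_edit_tool}  # tool name -> implementation
chat = Chat(model, tools=[text_editor_conf['sonnet']], ns=ns)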
r = chat('Please explain very concisely what my _quarto.yml does. It is in the current path. Use your tools')
find_block(r, ToolUseBlock)ToolUseBlock(id='toolu_014SeCXeGR9k7UGovNBLMR1K', input={'command': 'view', 'path': '_quarto.yml'}, name='str_replace_based_edit_tool', type='tool_use')
chat()This _quarto.yml configures a Quarto website with:
Project: Website type, includes
.txtfiles as resources, previews on port 3000 without auto-opening browserHTML Format:
- Cosmo theme + custom CSS
- Table of contents, code tools, and styled code blocks (light blue left border)
- Custom wide layout (1800px body width)
- Keeps intermediate markdown files
- Also outputs CommonMark format
Website Features: Twitter cards, Open Graph metadata, GitHub issue links, navbar with search, floating sidebar
External Config: Pulls additional settings from
nbdev.ymlandsidebar.yml
- id:
msg_01Fri1zQ5FiEx9VNFxAbPZhm - content:
[{'citations': None, 'text': 'This_quarto.ymlconfigures a **Quarto website** with:\n\n1. **Project**: Website type, includes.txtfiles as resources, previews on port 3000 without auto-opening browser\n\n2. **HTML Format**: \n - Cosmo theme + custom CSS\n - Table of contents, code tools, and styled code blocks (light blue left border)\n - Custom wide layout (1800px body width)\n - Keeps intermediate markdown files\n - Also outputs CommonMark format\n\n3. **Website Features**: Twitter cards, Open Graph metadata, GitHub issue links, navbar with search, floating sidebar\n\n4. **External Config**: Pulls additional settings fromnbdev.ymlandsidebar.yml', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1591, 'output_tokens': 168, 'server_tool_use': None, 'service_tier': 'standard'}
Images
Claude can handle image data as well. As everyone knows, when testing image APIs you have to use a cute puppy.
# Image is Cute_dog.jpg from Wikimedia
fn = Path('samples/puppy.jpg')
Image(filename=fn, width=200)
img = fn.read_bytes()Claude expects an image message to have the following structure
{
'role': 'user',
'content': [
{'type':'text', 'text':'What is in the image?'},
{
'type':'image',
'source': {
'type':'base64', 'media_type':'media_type', 'data': 'data'
}
}
]
}msglm automatically detects if a message is an image, encodes it, and generates the data structure above. All we need to do is create a list containing our image and a query, and then pass it to mk_msg.
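For reference, here’s roughly what mk_msg builds for us, written out by hand (a sketch only; the media type is assumed from our .jpg file and is not claudette’s actual detection logic):

import base64
# Sketch: base64-encode the raw bytes and assemble the structure shown above
manual_msg = {'role': 'user', 'content': [
    {'type': 'image',
     'source': {'type': 'base64', 'media_type': 'image/jpeg',
                'data': base64.b64encode(img).decode()}},
    {'type': 'text', 'text': 'What is in the image?'}]}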
Let’s try it out…
q = "In brief, what color flowers are in this image?"
msg = mk_msg([img, q])c([msg])The flowers in this image are purple/lavender (they appear to be asters or similar daisy-like flowers).
- id:
msg_012X8AHYbG5nBrHP1FwzTNYX - content:
[{'citations': None, 'text': 'The flowers in this image are **purple/lavender** (they appear to be asters or similar daisy-like flowers).', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 30, 'server_tool_use': None, 'service_tier': 'standard'}
You don’t need to call mk_msg on each individual message before passing it to the Chat class. Instead you can pass your messages in a list, and the Chat class will automatically call mk_msgs in the background.
c(["How are you?", r])For messages that contain multiple content types (like an image with a question), you’ll need to enclose the message contents in a list as shown below:
c(["How are you?", r, [img, q]])c = Chat(model)
c([img, q])The flowers in this image are purple/lavender (they appear to be asters or similar daisy-like flowers).
- id:
msg_012X8AHYbG5nBrHP1FwzTNYX - content:
[{'citations': None, 'text': 'The flowers in this image are **purple/lavender** (they appear to be asters or similar daisy-like flowers).', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 30, 'server_tool_use': None, 'service_tier': 'standard'}
contents(c.h[0])'*Media Type - image*'
cThe flowers in this image are purple/lavender (they appear to be asters or similar daisy-like flowers).
► History
user: Media Type - image
assistant: The flowers in this image are purple/lavender (they appear to be asters or similar daisy-like flowers).
| Metric | Count | Cost (USD) |
|---|---|---|
| Input tokens | 110 | 0.000550 |
| Output tokens | 30 | 0.000750 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| Total | 140 | $0.001300 |
Unfortunately, not all Claude models support images 😞. This table summarizes the capabilities of each Claude model and the different modalities they support.
Caching
Claude supports context caching by adding a cache_control field to the message content.
{
"role": "user",
"content": [
{
"type": "text",
"text": "Please cache my message",
"cache_control": {"type": "ephemeral"}
}
]
}To cache a message, we simply set cache=True when calling mk_msg.
mk_msg(['hi', 'there'], cache=True){ 'content': [ {'text': 'hi', 'type': 'text'},
{ 'cache_control': {'type': 'ephemeral'},
'text': 'there',
'type': 'text'}],
'role': 'user'}Claude also now supports smart cache look-ups, so it’s very simple to keep an entire conversation in cache by constantly telling it to update the cache with the latest message. To do this, we just need to set cache=True when creating a Chat.
chat = Chat(model, sp=sp, cache=True)Caching has a minimum token limit of 1024 tokens for Sonnet and Opus, and 2048 for Haiku. If your conversation is below this limit, it will not be cached.
chat("Hi, I'm Jeremy.")Hi Jeremy, nice to meet you! How can I help you today?
- id:
msg_0199pzvQJUHhmbFFY4YmLY1W - content:
[{'citations': None, 'text': 'Hi Jeremy, nice to meet you! How can I help you today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 20, 'output_tokens': 18, 'server_tool_use': None, 'service_tier': 'standard'}
chat.useIn: 20; Out: 18; Cache create: 0; Cache read: 0; Total Tokens: 38; Search: 0
Note the usage: no cache was created or read. Now, let’s send a message long enough to trigger caching.
chat("""Lorem ipsum dolor sit amet""" * 150)It looks like you’ve sent a block of repeated placeholder text (“Lorem ipsum”). This is typically used in design and publishing as dummy text.
Is there something specific I can help you with, Jeremy? Whether it’s a question, a project, or just a conversation—I’m here!
- id:
msg_01G2F57sM2WgafJuUTXnpD8x - content:
[{'citations': None, 'text': 'It looks like you\'ve sent a block of repeated placeholder text ("Lorem ipsum"). This is typically used in design and publishing as dummy text.\n\nIs there something specific I can help you with, Jeremy? Whether it\'s a question, a project, or just a conversation—I\'m here!', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1089, 'output_tokens': 62, 'server_tool_use': None, 'service_tier': 'standard'}
chat.useIn: 1109; Out: 80; Cache create: 0; Cache read: 0; Total Tokens: 1189; Search: 0
The context is now long enough for cache to be used. All the conversation history has now been written to the temporary cache. Any subsequent message will read from it rather than re-processing the entire conversation history.
chat("Oh thank you! Sorry, my lorem ipsum generator got out of control!")Ha! No worries, those things can have a mind of their own sometimes. Glad we got that sorted out.
So what can I actually help you with today?
- id:
msg_01FUPvZhmuDjo13Y7DZNm33U - content:
[{'citations': None, 'text': 'Ha! No worries, those things can have a mind of their own sometimes. Glad we got that sorted out.\n\nSo what can I actually help you with today?', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 1169, 'output_tokens': 39, 'server_tool_use': None, 'service_tier': 'standard'}
chat.useIn: 2278; Out: 119; Cache create: 0; Cache read: 0; Total Tokens: 2397; Search: 0
Extended Thinking
Claude >=3.7 Sonnet & Opus have enhanced reasoning capabilities for complex tasks. See docs for more info.
We can enable extended thinking by passing a thinking param with the following structure.
thinking={ "type": "enabled", "budget_tokens": 16000 }When extended thinking is enabled, a thinking block is included in the response, as shown below.
{
"content": [
{
"type": "thinking",
"thinking": "To approach this, let's think about...",
"signature": "Imtakcjsu38219c0.eyJoYXNoIjoiYWJjM0NTY3fQ...."
},
{
"type": "text",
"text": "Yes, there are infinitely many prime numbers such that..."
}
]
}Note: When thinking is enabled, prefill must be empty and temp must be 1.
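For comparison, here’s a minimal sketch of passing that thinking parameter straight to Anthropic’s SDK rather than through claudette (note that max_tokens must exceed budget_tokens, and temperature is left at 1 per the note above):

from anthropic import Anthropic
# Sketch: same thinking structure as documented above, sent via the raw SDK
resp = Anthropic().messages.create(
    model='claude-sonnet-4-20250514', max_tokens=2048, temperature=1,
    thinking={'type': 'enabled', 'budget_tokens': 1024},
    messages=[{'role': 'user', 'content': 'Are there infinitely many primes?'}])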
think_md
think_md (txt, thk)
def contents(r, show_thk=True):
"Helper to get the contents from Claude response `r`."
blk = find_block(r)
if show_thk:
tk_blk = find_block(r, blk_type=ThinkingBlock)
if tk_blk: return think_md(blk.text.strip(), tk_blk.thinking.strip())
if not blk and r.content: blk = r.content[0]
if hasattr(blk,'text'): return blk.text.strip()
elif hasattr(blk,'content'): return blk.content.strip()
elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
return str(blk)Let’s call the model without extended thinking enabled.
chat = Chat(model)chat("Write a sentence about Python!")Python is a versatile, beginner-friendly programming language known for its clean, readable syntax and wide applications in web development, data science, artificial intelligence, and automation.
- id:
msg_01HosWqAh4tvzn4S6uSDaKFc - content:
[{'citations': None, 'text': 'Python is a versatile, beginner-friendly programming language known for its clean, readable syntax and wide applications in web development, data science, artificial intelligence, and automation.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 13, 'output_tokens': 38, 'server_tool_use': None, 'service_tier': 'standard'}
Now, let’s call the model with extended thinking enabled.
chat("Write a sentence about Python!", maxthinktok=1024)Python is a high-level, interpreted programming language created by Guido van Rossum in 1991 that has become one of the most popular languages in the world due to its simplicity and powerful capabilities.
Thinking
The user is asking me to write a sentence about Python again. I’ll provide a different sentence this time to give them some variety.- id:
msg_019rersaZdL6583EEqwXWtX8 - content:
[{'signature': 'Eq0CCkYIChgCKkBqkeJ1tZhrhvIqbVoNsgLCVbc14UtYRGHzdP3wDVTCtCf0vAiquOMlp4TKBQonEgNtBFPrmQU+yYpk0uZjOPFNEgzr/55IIhxCJbKvbUYaDKHCNA+1fvatxV54myIwGEH/FhcgaNQtnK+IH1HcCIKGJtcivRukB6dBxQ4rl1upO5I+hTf7/Q2xp3Ez1bT3KpQBn6/42HPxVdGnM5DhphDnS1scfssfqRFxENBxIh/0zCeOIEpaH5UExfSKLhlMdwY0dr7aqC6kVBcoM9x5t6hfbjB68RZ/TnT5wBxBt3g30kievDjt8SX6PkUb2aPvZvdzFaXZJoH2BN5gbwwrob/1qRU6I/MzcU5O+fA/0Sz90UjeM1UCpaA2vR4IQu6yw4KHQSkn0RgB', 'thinking': "The user is asking me to write a sentence about Python again. I'll provide a different sentence this time to give them some variety.", 'type': 'thinking'}, {'citations': None, 'text': 'Python is a high-level, interpreted programming language created by Guido van Rossum in 1991 that has become one of the most popular languages in the world due to its simplicity and powerful capabilities.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 89, 'output_tokens': 83, 'server_tool_use': None, 'service_tier': 'standard'}
Server Tools and Web Search
The str_replace special tool type is a client-side tool, i.e., one where we provide the implementation. However, Anthropic also supports server-side tools. The one currently available is their web search tool, which you can find the documentation for here. When provided as a tool, Claude can decide to search the web in order to answer or solve the task at hand.
search_conf
search_conf (max_uses:int=None, allowed_domains:list=None, blocked_domains:list=None, user_location:dict=None)
Little helper to create a search tool config
Similar to client-side tools, you pass a non-schema dictionary to the Anthropic API’s tools argument, containing the tool’s name, type, and any additional metadata specific to that tool. Here’s a helper function that makes this easier for the web search tool.
search_conf(){'type': 'web_search_20250305', 'name': 'web_search'}
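The optional arguments correspond to the extra keys Anthropic’s web search schema accepts. For example, assuming search_conf simply includes any non-None arguments in the config dict, we’d expect:

search_conf(max_uses=3, allowed_domains=['weather.gov'])
# {'type': 'web_search_20250305', 'name': 'web_search',
#  'max_uses': 3, 'allowed_domains': ['weather.gov']}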
The web search response contains a list of blocks: TextBlocks with response text from the model, a ServerToolUseBlock, and server tool result blocks such as WebSearchToolResultBlock. Some of these TextBlocks contain citations referencing the web search results. Here is what all this looks like:
{
"content": [
{
"type": "text",
"text": "I'll check the current weather in...",
},
{
"type": "server_tool_use",
"name": "web_search",
"input": {"query": "San Diego weather forecast today May 12 2025"},
"id":"srvtoolu_014t7fS449voTHRCVzi5jQGC"
},
{
"type": "web_search_tool_result",
"tool_use_id": "srvtoolu_014t7fS449voTHRCVzi5jQGC",
"content": [
"type": "web_search_result",
"title": "Heat Advisory issued May 9...",
"url": "https://kesq.com/weather/...",
...
]
}
{
"type": "text",
"citations": [
{
"cited_text": 'The average temperature during this month...',
"title": "Weather San Diego in May 2025:...",
"url": "https://en.climate-data.org/...",
"encrypted_index": "EpMBCioIAxgCIiQ4ODk4YTF..."
}
],
"text": "The average temperature in San Diego during May is..."
},
...
]
}Let’s update our contents function to handle these cases. For handling citations, we will use the excellent reference syntax in markdown to make clickable citation links.
find_blocks
find_blocks (r, blk_type=<class 'anthropic.types.text_block.TextBlock'>, type='text')
Helper to find all blocks of type blk_type in response r.
blks2cited_txt
blks2cited_txt (txt_blks)
Helper to get the contents from a list of TextBlocks, with citations.
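As a rough sketch of the idea (illustrative only, not the exported implementation), each cited TextBlock can be numbered inline and the URLs collected into markdown reference-link definitions at the end:

def cited_txt_sketch(txt_blks):
    "Illustrative only: number citations inline and emit markdown reference links."
    urls, parts = [], []
    for blk in txt_blks:
        txt = blk.text
        for c in (blk.citations or []):   # each citation carries a source url
            urls.append(c.url)
            txt += f' [{len(urls)}]'
        parts.append(txt)
    refs = '\n'.join(f'[{i}]: {u}' for i, u in enumerate(urls, 1))
    return ''.join(parts) + '\n\n' + refs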
contents
contents (r, show_thk=True)
Helper to get the contents from Claude response r.
chat = Chat(model, sp='Be concise in your responses.', tools=[search_conf()], cache=True)
pr = 'What is the weather in San Diego?'
r = chat(pr)
rHere’s the current weather in San Diego:
Today (Tuesday, December 2nd) will be partly cloudy with a high around 65°F and winds from the SW at 5 to 10 mph. 1 Tonight, expect cloudy skies with a low around 54°F. 2
Current conditions show it’s clear at 54°F with 90% humidity. 3
Looking ahead: Tuesday has a 32% chance of late showers, followed by partly cloudy conditions Wednesday through Thursday. 4
- id:
msg_01RgMSjtJD3yj9dLxQ5AW7qi - content:
[{'id': 'srvtoolu_011k5YzwkJEnWpXu5mJX4rJZ', 'input': {'query': 'San Diego weather today'}, 'name': 'web_search', 'type': 'server_tool_use'}, {'content': [{'encrypted_content': 'ErUZCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDIAHrAnqiMR1lxzcaRoMpvePaeD41qawN8k9IjDXd30oKk+nNL0dew0xw5prCxcx1CsrqKTMYhgXrjMU9cSqTTsm5gXNWkgHeg2CbPoquBgrZgHdrD9ivYR0RlYHERgaKOd6zsqsK1xwPL56QBB1ztZfWpwA3xhdeMwMMRrk3iEDUvEuunYEQHUQhB8JhEIIuySFDJwwXMGsNVXgJJImwffy3mRofoVkLjMkaaQQZpLQnhZFZJXIBX3+ta2v5uYsN8Ot+6mUjvi+RDvU3+BwjCgEveVMftjNxq2yNqwuCRx0Vtxw5p126P7Z6OEWkWkpgpTpyhR82wDQ3Gzr9RW07sXJPNllDxLwGiB/6effxC9ceC3xkeX2I6qSneDHlOmznNNVWsvBNa3q6ROIn8yLX/KHwGK95cxdMXRRPLO4/ETUL2jXrcjpriBFfydadt+yC5HtNFj0nZk6tprTJMrtHl2A9Ef4PZhNRnV0FxmNPgs4zUuIXolJigGpxi90nQgsXujT9XJrCQEg6HFYAi7duEsXq3Jt9iETwQJxtM74EQ6i5bUM1Xx6IZsHPrIhgQHGPq0L2IJk63JA0Ecawo5825IL46UISapVLZW228sU7oXxNQt+kU/Ip13tUOUDtFwQd//JlTaXNqhMCyhAbBJYVp7MmH07BF/XH3+EULjvMw4SpsuO0OjCCdK2Iu+50N0DpgIpicSd2HLIeTZNC1Oqbk6HMVu+ebvcum2RJENXxOPdz5OFWboXa954sunEGTvwibqGiMNHwFt0JZZjQq8suFcYqixCMa+DLsy9xGPc51dJlI9kzQCdBud+Yv2qC4mTjrCq9yGh0n3/af+3pZQ2bstbN/1M+rZ0brk7ulGnGhyyG4q/KWdrCy40+B9fStGnJiLJS5cJAt4mDW7j6yLA9y8mdx2vTrIuic3BT3UFLjC7GwT1TcO0XdrBVKfq7MOyq9VZDMbfzpeqAH+3tRsMnCqwjQlfstlu0z1tGTZ4DIaF4pCJEsHSLkpUxq8dwlFrDMe/zXc/B8qSKuImEkvv+rTAqrV4nmRT6/WwEQxkzjjAfyzV/VYa3tId3msrzxil++xFe6XiWi1BgvSPLRPI+uYtOeYVTNGPG6ILFzo2TZg5K8pruE/bJFHJTq2UnAmeVcXTSW7qX+dAHKRwPQJWvUuZdasgllS59B7y3kB6RwKSZxcVfeH5pRa6Kh6NIEAhn+rAPdOumh+NSPgztcIxltxdhfAf5U6ChDtnOKGihxcaaQz1uJyMNQpXLe7a/a3SoVmQCA4afaoE5S3MF/LqqXo53g+9v0O9vR2mycTyYZYUEnTnkLSRoWCyddwx3Svmnm+/AV1rewsCFJTqXzLlGYua8uuttuP2caeHJm39Xtx6o3oaf2nau+cSxcsPxZHI7lI6pdBxGHK2yFcjk/6Wf0kDcxqxdV6NV6eONcZuhHfh6tM1tez8qqLwFK/WjwfUb4dfn0aaosdfLnyY0TFfhnyY0kBU6nQiMbILCe0Bs3/66TyxFQAmznCCZPPjCdymF9eiORSR5vDYYe88Lx7ybbEzv8MBrZHmxbzdUjvYvIc0y7P7Fd9qaoVC5p6z4Nfd9oT2Y0OtdlxIFfI6LZ6+GqJg7u3kH6mxGWoXKkk+Bf3Dn9bYemz/m5TTqXgNNmUFQ09AsnTmMuJ13wYP7gR/OLbjUU2KRNru9wHoD7R2KVPaYZoF259GjkHzpUvq2AmbNNk2+rSS0zqiEBAq7ZDhP3wOH7Vk0o3AfTIy8dwMcX9f4GqNUiW+sy6hpXww25/iBVADxFf8jH+U1KOT0x+8xB3AlMmnppUUyYnbemnwyZXxMOXtkhLCFgPhklp916mG1A6Zh5uUWTLKN+3yOSx0J+9gH7bAuNETk8XxtEuUCbcZqWXYnHWw8GRaG48SUY/OuBpc7YeM58PjWLNcGv7Co2ueUH91OxflFCYcP6d7MG11iI0Hm6BmsHGFKEc9tUYDjn1Nc7dfJWth3cdWt/F66T0FlyZxHuvlwoLSlnE83bk2JWnpxxwVvhlMli3+FH/18Puj+TK0Oza8njBjdAvu90CKJ/Xr9xqn9yqz/trmcUQAtC0MU/03oNPqbBiQbkfyr8K/mxLovabD4ESL64rznru9Dq37JiZxs0RbsPDiCpGSBH8R9eDAEyu/nHOIbIVrVinxtJI8sHA6Hvj74K+/oDCnw9RSrujxIh6XY8eo7NNrCLjQBF0k5SSB+oh1Ff/d2AKmwW7uU4Nf6pnLVLdHxXgfrxdt2wKQFBXPceyGHrpuYK+e4F5XwWu4IlNNj+PKsSpziEJGv6BWmh4Ny0GF2w6gWrnr0M37KFJL/b47iNhVx3e9cyIHFFVsDrZ+IGWX14i4p2lIbOGZmZxeWBeQDoPuLjU2wiJkN92q63INlLBHnVmB2Awuz4Dj5qUmavNVY6r1U5wYpnoI2CIzf6xn6ZqDOsyU4suEoyj7Hc2WLW0KA6NY8iCE2gSmMbLzRTkLcWWnni5SGWVDajp9/xYpj/re6AQhCiF1v9qUobH13e9HsWIqx4uMwh6U0Gkunfb7Mq2ksRkZYXX5sgTD41B1nVFWMvPwtKG9p3NbIQyLTztySkyOa9Xtvxy6/rNLQQVGVXG3ttw63N5Ul4twVnSl16gMBOkCUzNZQPCZZ7sR1PkL5pwxCSF+aMP8czVhEzkrpC+aQcOGXipk7FJyCmih1f8QSPMeVp1LgBFD9UuUTQlhoUbwtkzUqPqmV9htmMd9q2rAj5R6JZ/qMA50hYPjUfXVUxXmFVlgerpQxN5dddyeej2uHjxh2tKRMGRdEkviACRqU8ZWQdyIrqD4W/OlHQoMr5WH/QccEwo5+iT450eKmruyFfo9WvXNPBbw+qs5MrafcqvGeqdnf8qUem9E5OyP9fRYQcNdclKMWyvax7zak+p6iLuSZsd4kcUCnotQl6BRqOeJK8C5xipkG+Pg0CCoHKytYR4qrt+1AM/TK3E1LtpyplZjIkY3kkJ/isBVWZ39oprNIL+Rw/PHcFRc3c/WFiCz2Lz1QlnrVJ1WrHHOPT3od7WPx+WjrUn6Unl+QK6d1uJPMRisSguVUxKSZ5ZQNaAQuHHlqCHVZBD5U2otyWqMPL5wD/VCpWpg67rE5DqwElWv6aIYL9eMraNmYMhW4Cbn9zcm3YUdBZ/02gjNU8yMVCLgH+twqtgFB9G6+fAZzFV8d9YZO3g1hHA/ELCzcWL/7+0WhHUI2AMg4DG8BbXSacgFA5zuXfmk8skefprQ93DtWXasHd2Kx8MJsNhW0OZ8xfM/qr591Hv/IRAksQ7jA
QoUozWyHgWe/auU3YvG2C5G7WOw5UiAoopOctdGXPhkj06xKeLQU4V4wwqnkaBQBDBwXmGrNwz5rqoD+0Ur415IBPHz5DZ6NKe1ohg3SXrXz6iVffH/IWNjcBiXLmIHnrY1gZC//3nFxZOUWRhIqanfXpPmQTNkjku5lLX1Eg7/KcRWs2U/a53Ahw1YCaC0DuG0cRg1qOYI9KXbkYJAr58NUgaHc91C0qBEaZFlRuHc4aFStck8ivKjp+7wtUwYaVtgo2lzB4HJPMg49xwMsr68oUkZVcplQsePZfE65FY9pplXWDrtmMpJ81FSlIFEwyCiqfubxRj+rfJM1DZ7tpK/E/VvSfEjJ9KbGB6YDPmJYYEnsSpCK34kezCASBj31GMsw8w7rOhLElbiRN2tIkEWeXiKMiTCrR4IypeVlpMJFxsGs5yGgdXfAmqoBrW7vyHHlZCr564gqHFE6TGM8fQmgBZD2tBUDJQIFfG/0BcEW3ADSQtRShzOq7RhKhEAVwHlccSdP3IHYhcjVng+dx6c44+rcs3L5Emknw6KftTYgqRbHalzX1yM/ztikmCGSnnV10wG9CVXgOd1bn6Y4M16rVGORDdLNiDZCjjqff5ny3djUP/+PeZ8/oKor2RLW/+XnDT9TlH+s/A0F6ibDm9rAPZj9POEpy45CfBKYtae+PHHY/DwD93VwQiNAq/Itzvugp/xSLdyDULJa2UWrske8vHeJBsY+CtNCGCNT0vAyOGCj08q6GYt9qJRfT59CpmyPl18rxPBz0Gc4AF2gs74rD+3sZqv4+Y+K0NTeS8I2TDoqn9iA1yw/PbvFXIGQmCXSXRIxTHm933GSnEnzX/ezH5X3pXTw2DSg00b0c51DtaepSGJlp91sOaQemWDVya+gQ7lfI/B8YLmYfz5bDsHnKYRZrM4BUjRG+P/TnNrPAT+85j5zf7u8ysB894p0uuUnxgD', 'page_age': '1 week ago', 'title': 'San Diego, CA Weather Forecast | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/weather-forecast/347628'}, {'encrypted_content': 'Er0ECioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDFDe3BkMT5MQX52mUhoM/9n8GRyu7Bb8P8+fIjDiLMaRAL8R2Q234V6Xor92qPOLBZjxsM2m6VgxPVppkkTzY0Zjq+7woUSB2br+ZUoqwAMHfoMP4XnPBHK47ACt6eJrrJRdkIrtrlxfZAUcE4cpvP6p/YTl/+6PACvmklESdfexmn52HoSOsE68c9nBno6/ztAw9Zyz4AH9X/dNM1n0h5qoca05078Vprr0zsoC8vC4hlIEiDDAdNh8NWZoEIncUh6NT/F+5mNUHaXMxiiAKEJgt/clrCiW1EpAUlIH/Eo3TcjwJQ+yw3zsP0nv+gXiUev75SLOmMoSgV/TNL5Kr670GbJAGYk6L1QPoATeTKg0IgCN7jCONifdkAloustkJHCq2Q8ksjnMWHhNoQmrUpeGNIZvYBAuVqO+si8clA8xOGGJAD2VJb5qovWIt3D3r3qqHlo5nGVUC7Gcv2Z4j4sElGHWrLPVB5htFKBwEUpW5mAoTmI8T+bXJ/DJbHP0s8qYQR+vOiPVD6YI/IKxC42+WFtNX3C3b+vl+0TkYeffmL6ZWY0O366NL/xDrUMazNUgUuOkZV6g8te8xB0+qquNld3Ly2KBBPE2eWcK43qNmHBHJJgBWYKW/6TRt75NPd704OCSP5DfIl5ddStbi5IO561grlWSWCEG9SiKmamUrVPnLqYpxY87X31nJwAdGAM=', 'page_age': '1 week ago', 'title': 'San Diego weather forecast – NBC 7 San Diego', 'type': 'web_search_result', 'url': 'https://www.nbcsandiego.com/weather/'}, {'encrypted_content': 'EpoHCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDCQSaOhTnM3NJXJiihoMCC2Wd6AiDNXFxd3KIjAoyQkcERKYWXfkD4RgKf4eRDobcQLSNSQHohGGZx5i+NxwcJZqANo9F2HZC1BXs8EqnQYIhi7XQnR8pDLz9esENEnbmYBNKQQuKDqMFP+zr4VRnqXmclQsq5eEFRyJado88U9UsxPor2gPhb3fwHfA0XXheKZUUfBEIjQ1clG0ZK0P5wdbMB/p78/8T+4uaFFmcVNMyMJ/Qb5KYJVf2Ziejkv71t7KquWb2EZ7xIqmu/NXSyEKaKIIaj4xt6kRakVg82Ruzbd6oaBKnfQovthlWpO6cCfasfUL8Z8+TShcopa5olb317hTarjHB9dsbSRTq/D9Zu4TNhMJwfe56T3PA9xBQBx6tjsyDXprTq6q/zelC+q+xfJVbsj4F8QIvTpNwkSv65t1kg7P41+vWcBGZERIbNUtlZMqlDx5CZLDscx/ceqInuL3TiIj4FkqGTcMfq7R6RwG0ftjsaJqxVR501J/jOKTtQtwIMUKB1wqyLQCXaecvVTWFTScB/yd5miAZySGwsQcwIQlETYeMW0PlPUdfKWQbvR127SuBQDj94+50mDN/uY5OpZFEfQwmZXwtvVg+lfA26wrA9FPlz8CN8TXQbXwFNUOjWZwzVdDTA+nszVCM3YD6tMIRxUSu1b2MmiTQi4qV1q62sHoKOLZo7eeHEvFClnaVifAcK4ittDl8ohgNCx2JiBfhObbOBGjSTFh1AfmKxot6Ny4A791fKQShkkqxIVQviC7qF4plhX4Ie9m2oxOaxkESVbTMSWGBXVl+fR8WDH7ia1qXawIWPrDLO56HKpiaQtQQCJHGYk/Tek1PDN1+Emd/h7d/F60vDIz1F870LIbgNpQa8t46Pu3EYi4ahOANmi1dDOSr48XASRohyetpXYWzMM0eBpQgyOrtHcgtB+kOxtUJIj8RHQkCQnnTR6egeYHg4sdqBp1I9rElZihInQwubq1aQt6CiYvwApO+AnZMFe9bU6VUF8oxE08n5Wc3K/f7q0e6VqJnA0+H2lSqYsN24wBW4qja2Tw32bDgDt+zHGLs0DZpbe+feJub+PmTb8igriATJ5ZpqZfm8f1aWwNj6bdHFPltQfpHWl1ohz3QhkWqDvsi3obL3T5Ao7bamLEGijighgD', 'page_age': None, 'title': 'San Diego, CA Hourly Weather Forecast | Weather Underground', 'type': 'web_search_result', 'url': 
'https://www.wunderground.com/hourly/us/ca/san-diego'}, {'encrypted_content': 'EsoCCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDHtHobb3pNC9BZfI+RoMibISNhV/d4EEFEoXIjB4Vlt6g2OsJP+rtjRbbJFSxuFfviMO5vzP+zEmItq2LkYe4B1wkR2vRRS8v0Qw/3QqzQEuPuPDVvvx6qP4Qlf5zwxOifSfV8Qg2/aMp8p4/Q5bzJoNDWwnToDsP7s+uDGL0j5PF7rq5A1vuVKFTw8Gp2uUQZMGbXr79NcauYBXhwVXuRrc4XrC+/FHPCIm0tzY67I3QiEc8jxxZ9Dt6ovpXwbi5CT1k3Rab+OkLWj1QeHqOHnPcZ49i6JSNKQJRwVqE7LJHd9CGfboGanGVB3j5scbdtLpJbGKe+RzCeIJXxFTHWQCA7hEsXSLlwFgEZ11fKZhc0T7qe0sTEZkiTXyGAM=', 'page_age': '2 days ago', 'title': 'San Diego, CA', 'type': 'web_search_result', 'url': 'https://www.weather.gov/sgx/'}, {'encrypted_content': 'EuUJCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDJJrBgWz2OjQaDYBZxoMYu2NlJr5DcX5QahIIjByshW2HIOlTKZZBj6q3r4Y/ZGMpciBM+6DbPTXWcT7Z4C7PloCSqx2E6C7GaqT5J0q6AgrI3KH3Zk5OZ6IHNhI4tTqDKBrXFWwAlsWFSUhoaKACQd86Hbi7MNy+njwdrC2fYxmvL9/hkfjMvfqZzwLqDMJgll4r6iZWqFjqZPuSfY7pSINbh1qjUQujGm9XiclyD0vqSfIfUlrkqU9cBmoKrXNEWb43+ZKaX3awaASpnwSdd3FraZWET2z4InQwEwWqUs80kVzzXnIB2L8jqcNn8HsSibqpVeKUE59CLNISLl01WEtXkStpz+OlSFimW33kH7H7GGLpjYJFWb3aGVu7roEUwsdyyW6q0KNVRu4i9TajapVe5GrWj4WYjAhJCFytO0200k1TrEwHJs/AnMFKx6GXmpffAyfKGeZmYlHaI4gVHvSl3Sct9EVmBggPQKvwwmLAnRP65tPJWu27uoTpNtjfo4JKLGJK8l1RgCQQjDHn4DnpxYFT6uMh9lGwNQpd8nOK4fUsHMJSFK3sdWSTWTQZ1NLfM0crxegnzlG0DOhMXc/XCILFgX9nzb5ftBTVebFdlEldsZ+uY3JSTf1YRL6E32SoXjB3Vriup1cN9hGBAnt4oM+G8XP2J4VXb/WbMsQ8HEtsCEm9qu8tUcH75MFmwJpRMrcNqaTUMKrRw1fQbhMG3udQGsuKOSbzfWzluBe+pdHR6Pbb/aBI4V3vqkyar+IMoC89BTOorMsQfHKvkkUOd50vvE74of4oUCotIRyx+ktAWDFgcZLLsPr3GlJNisSBYRR8hzvOWEvmtDKgdSYrATwXzT/mcJIkzpmgnECHP5nHQu8TS0i6hK0EHQYjKuZT/uRlcB2ZmaAnCB5Y7RkUuTSJtYcrz8icxHzr9xXhZw1DnVJPNFjPRPrJuegBRgUSlUv/OGZHNFHuwrUZYqVYutQXj21zHVy4SNZrPCgrGHxCFircg7+ScqiAlX4XZTDJbkv0xAf8x195/SvaTpp0yddMyYmvsOwGItsM9oIMjVJNMjNUQ9FnOWtEUzVi/InSXc6w0EOrbI//qZyC+SFtAo6AuWl22kX7Z/8Qxai4leEsX4c3yaBxP9V/9fQ9WFRQBSVCgoalFYqVqy+Iaf/lnJdYxKojz0JrcT4KXUyf9cPxRMXONKzdIi7VVAv6Wapp+yjOJ73QSEIu5iph6L4UjmAZmpkt5pbm1IMuxF0PZ9KTgZH6ZxAc2dbVSJWksHAPTVxqxm8kY3VeyYdBXtKxnYxLea8/HHQTMHKRWG4IP1BuvBQKnqmTbJhsJylVJ7pK8AYZxHkjDN9AoY7yWfmgAhQ+Kz+P8lamvmvq+h3o2dvGG9GbtBqv0PQtcdMYl88yN4IPFFNcnv8jgtl3bNiI8GUFTzRRq5Nz/+y5fKarw2ajzwyz2EHiEaaV0Or4PoXBEJFioluzyLHxkaiO+RPmzyt4LFNkxq86wd+f8AVGE4wOCxOx+5Omdt+T1J42JAq81RQm1KuU5bWCDY4y/KeeDL2DerL3lWwgkk+UZH2xTBVVIO/U11TjmJ+Sj4vNzrsDCu9kEIYAw==', 'page_age': None, 'title': 'San Diego, CA Weather Forecast | KGTV | kgtv.com', 'type': 'web_search_result', 'url': 'https://www.10news.com/weather'}, {'encrypted_content': 
'EuEMCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDGdzJ3iE27i6eGMr3hoMs/GP0TxsbBZRzdBBIjD2Dwhzlu0wroKESTIfmj11US8pL4+8pEq4GQ0zIr83z2e35HR0DiQ/lR3XuNe73Jsq5Au8zMRqL8c2eob/KTt32pI9LbkkOtO5j5mfs2U8foamyw/SbsWzB1sZ5sCt8tHJBDtTBZOn5pFCbTc3hfNt5vT2aKH/YAwx3qvpDnOH/GOtcSn/+SBfccQmMt4a+hAYgXRH6PmV+W2Uq7rYVhb8leVkMv3n3tmUdyTrnAHYOdmA/i5muTAx38TcKxi/6gihAwIvFMewkK30esVlL/7y+YtqAGYJlNQUIE1+DlvFX41PLX+FRleh4PDF1oT4KIu6bR67trrH4mAL1OlH47V1Dgx46uOgrfLB+dSSsaBHsASN+sKqh77Q/ER0kB6TYaJLhEe5qxOc5m1HumLLMDFXjBJcBT2Awpk89lHosOhkEYYGEKAOxksVsCufFMaYaYYbJEShytzfF0dfg4mce7lg1Nm9HVxNGlgKhRzek/rt4+HxpXW6ei9gfOeR5T7BwgFe8AuNnmcZqy4iQIa72sQfAEBefpj4wWcivWL9NRDoW5X6lfB3lNiaj/UNDzbhj9TxsQCtMi66wHETH8STIxfifn+GbtzKE7fMrG/Q6yHga/3TUXmq0R7+iqr+xUWWRfWTjysBkZggd5uvHKlck/P5NnihwH3ROUROEvqVmIPKgg7MnY3cAu+/tyFn4iV0TbUTbA0A5OIzuX29eVDdBwvz+IfP4dHj0sAMYL+gqqzXSSJkzyDKXRX4IJuTICNvOc3J2Jbd5c40L+OCKVvTl+470eQEomGSe25mLqkWARtOD5Uyi+yO3hWV5wtELcqJ0C9CKHqXTYs+peB1/erRZXxaRxV+E8a56fRrK7tVgWF4oW4hTW4DJbBk1TnqjRCv1iGw++QOrBM4jlWwIN3k4tEF1OgKGIg3ffOUz695Z/JbiEdAxYG9+WuYQmq56JOVGGHrie6Ar3LJxHtOMcS5u2BaDpvHpxeUCWDAk9qBnJXH46ChPwPr2iBNsyET1FygBOum0ai8zjeVToswgS+lOit9am1BX1TbAqTcHLvpnqvPqG99Bue1+CvksCMWPS7/nzt8hr/iiyhfhDBvizYcrZlOnYmz1ziwbRug0vCWPdQEN0EgduSVsfByrC24vKLtw6XG4G2D2dz0rdKaRPnSBHvlCD1khqDGHpTpVqoFUfQjBlGNFbHOiBhNjk+6yq43P3gKnTgkA3vwGLlaEzDqN3IuoY6uCdqDTe06vN4939NIHCnSFP9gKQ9tQ6Bf9Qg81dRRi7qrP2Gqs3ls3acnSkHu/susJqPnT8j1o9qHR/3uWPeBFYs/QBn0gHOXfMd6uadKeul8EPJtRl0tVYUuNMSa878DK1rbTEnoQNCfz2PXN37JmrNJSdPJ1YURsIA9ICOuZ/O93Qiq11PhGYELwccW4BKpa+K+v5+qyAGkDfOgCqIBaJV25xpl4ec2b7jm9EiHsCf6+MTalSNaAzIOOeyxooos7XJwour/wJ8KDGffySbv5vFRAMUjyVoTE18gfgIBJZC/rhEDlDW8YFsr4amW48YGWD8iifQoRJEFTJ89mKD5PA/ThMyDOp1wNR7UNO7fs/eazmgc73i4x7wiRVOYFtr+W4GCMSkOzp/7IHM7qAT6cuHjKn6DPDDLkR+gjJno9Fqh8k/hyFaJR195FyqeV+ibwjjeYzcJ2WHYw3CehzXHo4C0bCfg8nR5Lb+aHrDNs9t5G9a2VponlIq9Jybc+KTwM0ieZo7yj7KWMtU4JGq4C5UMYjNiFQZqHcHSJeK1XFvd7Y7vgVllmjX9xkyGew053LK/TmZfjnEFqqbkwMwsybx4NcE2IHDlM/iwzJ1K0BYq8HDlvKoBvGGJK9tcDGEIm8aP8vzNBOC+rrdbuhj2i8S0MAc6/f6cEhOmEy89YWIOQ0hd67RmvnJg7sGkb0ikbZIFzAxZ3SVyb0iJ3fwbR5fY0mVkUCWF04tCu1UJtm5XenI1jDvuLuc+zb+LsByNpHGETSq1iuIlLPgNBia7fPcozHZ6/QFGs5JMUEH+sf1ODmPttJx1c8scmewmrC6aZDCpShgD', 'page_age': None, 'title': 'San Diego, CA 10-Day Weather Forecast | Weather Underground', 'type': 'web_search_result', 'url': 'https://www.wunderground.com/forecast/us/ca/san-diego'}, {'encrypted_content': 
'Ep0JCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDI5xROhXHN/m8ES+UhoM6lrJqYUso9G4qtN1IjD/mljhPxfWpmaK2DscsItbKb3lowAqYOuGG1v0881FW7DwbYg+bk+rlwgHb9yGNF0qoAj3YEp4YWNw44dXHdP+NZe1W0oPmyFCOqmpwKypflGmXEGCmEolLVWsZsSeHdrQ3FYu8od+/5lz3AkLXna3oB8ixd7MojTwGUnAxihWAqZ9JX6TJBtPA8VZlR6zdBMn2BkFvwB4hCa7JuRQicgANc/yc+hbsJL9D+uHP9LtX+8jZrPWScbjJzH+KcE0P7dW8tIOcUMF7o332d6ZDpwgFRpyKsCDj9atosORhs2Uly7cBo6breWOOZuo2j9dsNO32RJPvZ+AdZ7lVc+pzJp0gOg9nK/BdUN8h4NoOoEk6lv2PIe3iz9UeNfaxu+KC7lhKpamuKmCvZWpg22FEkdy5NNj6P7Q9Y/zM3+wQqEJn+/EDDoMxxX/zG4RCMSUiLkDfycKFUcXumwdiWWXsHyiVtbJE+FWTsEc6AAoVGJJx8vGt6oM6M9V7WTpr0K0xc/Ff8Yg5Hk7MsS+cljGXB0lotZAE87gTCIP+5E9xYhvMCeq3qTFNLr+fBZDT0HfWZBklTLtuRNT0swS8nBTkqSC2RGXwoOXjJNNFc++1Uyl44W/volkT2fSlTkaX1dGcxi092Wx65K23N1sle8B55Prnq5VJCmwGPnCMOyuf7UW58A975T00O20Bi/Qn52IoRFzeNYEQJotvQUWA8e7sicKW+fUObO2gGcDV+i7fGrBliukJlaEKwynm5K2arF+sTEnFKeL5Aqz16R7qzyaXE3kAdzAscV7LHAgVF9NWeoCrERjP2wOrRa5ug9j7lTI4O7Bod/DW6ZG365OBLKySUoZDogSDjpF2r816+9h68KJ9muGVL1QywDzxV2nKSaduLch8sPRWnWahWATgTagkQcx8VbloG1CJAGgPi5BJj8wXLVQC3jJ9iPZF4Pt8BXhIfrVvdGKdM/irLnKOkU94JWwWrrwdEOYLLDfYX2Sd1wrDy09EUJln3hDwF9lrUFl6A8kMGgUs94lKxNrGBpz7SZlit0b4aWTLTJvkMBKU8lBua8jBYzdCEvUArgq1HotzvN3Gy3laxkZS7gNd8+nJtSRKQuvsF9CpzYahjl7Cpm2i3oBkMAX71gNGZK4oeAVkXIlDf3Izn+FmMUB13PAoE28+U8WKWeoBPGFCYW70C5GqNTPq5BQh+Enu5LTQb+HSFJRH6Ut7e2DsE+u+YOJJTVqj/raMrBy6B/eosuVRUK1v2K9Y83vk8BYOZAU9M1BakisedoIhRBjbyHr9vgjNw9Utx9T1Z1VU1oHBsuh4jieqtf7jbHDlyeBxpd3hihNH6bDYDTrrHdol6CyNnJMX9fYRsod9CSordqIE9LbgBXzK0CeQ7314z4LkiGqZalEp2eK9iMu3cQdDojsK6Pkh8tF5b7VJapg0GCVz9VelXmusHqkyqcaAdE57FBcGWccnJjx5LEYAw==', 'page_age': 'August 30, 2021', 'title': 'San Diego 7-Day Weather Forecast | FOX 5 San Diego', 'type': 'web_search_result', 'url': 'https://fox5sandiego.com/weather/forecast/'}, {'encrypted_content': 'EssBCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDGk3vA744zx+v4oiNRoM85NwcZ9LhpUVKKVhIjBhB90SwP7FQzc5prC4F74u6Nq7yjA6eTQ0eGz1C95tXykxkG83bde9zDBYYtOhyiMqT9/LRi1it6gLU50411m3kcakvvBZmIMXuV7yOXeBshzUZ/oc87CRmLDlNKERNLnGVBY+LcAYpa2aunvUqZrKDN3DNCu1tEK7YOco2PYPPEwYAw==', 'page_age': '3 weeks ago', 'title': 'San Diego, CA Weather Forecast, Conditions, and Maps – Yahoo Weather', 'type': 'web_search_result', 'url': 'https://weather.yahoo.com/us/ca/san-diego'}, {'encrypted_content': 'EvYECioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDDG5NHCcqRPX9SWVpxoMxrogTPbZi1oIocSMIjAWEJFjJzYM7JyMjedV8GwPcWLWLXs9cRjtZKaeFosB2+A/PIdghnwcDnv4LstJOx8q+QM37x9HeomyTG065YDUO3j2+gXyihQw8agn+E2WpfazE92edII2kWZHGzM4lltT/K2C5wwKU+XIunBclczx868rQojKp/P8MHMzxcAiJxlyfKP+3zrKjdrzUPChiEsuCZ8Vd8opb7Sl45JfxaANW/dPWjjb+1rLyr6PpT+BncLjgduIL/np37PYryEJCjCV83BW2WT06OXR730pEx83dQk2Kkz4MQW4PmNljcmEHixce3JUwYyIJJvTUFhC4wIwmAThOLJ7rHXC+QkxoiiMmgl2XSrKZ0aDgisNRjWqm4nGbuORWaRoanAzq0sSVLcy4aGbyNu9Ie4FFin2NlHXd8b3utwr42ZB5tpydzltFeNIQsKXJyaROFkVC6Uu1oPJT1LQ8HToY1nkLVcneMunKNbdjGZ/X50jDsXFL9OdxT3AG4gsOZhGdFCg6gCqJBaYN+8/jD3Cfrr01LK7oW/bFr4QqKnj1+N6XqJxX3Dj2hh8Cbq/BUpS0sBWoPN9vFqoMUThaoGVsNrkwscSH8w5yU3c2AunGK1IX64Cg+N7usE/43nCHzOFQZxX6tMgujbB+8CzHOt/gO8phlIbGmct+ilzYqRRjajrsYkDqmL0xSedE+d4HfGER2FvkKyVQ6F+og6JHllFCrQ1D4daQhM+HdLSbeDB7kATwM7bGAM=', 'page_age': None, 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}, {'encrypted_content': 
'EtQVCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDL/C8QA8TyruLKc1yxoMXt6K+OiDpQPrDJKFIjCoTiYt/Exmyj3N5147Y2hAtJPWjdpBJFpJBTSnRla+YplTiDm4cWuB4piaNS68Lc0q1xSvGLnoNRGUjYzlm3w5lG7gf9rXu6nsezdKPeKaa3wUeZyJ79QpHzNRFZiTF+fqWvM1SR07Dcz9K8Xnv9mLItmhTOtulFxl1BuNbkdS6SW0HG3On4mCIv8rQhbmXomK2QFFO8MRQrMhJ8/4iG0LxdgY/l78bP80DWCVMfV1i4v5JCRHepRHYimXyGShJ0P95mIEoyN9lc00uqYAXAXBsmPh1R4qRKQ8YUkzEAL7vh7b6mbJCjAmwiNBZwOFS7x1DucAJPhhM2w+uNuPkxbEBv0qXZjaUxTiFRIPRMq3eHqSkPXGKckVMikbqCqQvHCoKo6mzy1sjcoMrf05hFXGkrodW4MVw4IHWHI4FZ6O4fZ3reex7V/94Bq+CPOeFbkm58SOtfIVQNn4nNp31gTy56EKW4sWsBo34kAsWFEQpT7ghB+uFhd9DNTdCB3ODMM6kPyYy+extxX005ZiqxOPcQ+5rlLx0uax6qemanWiJzRta7zlmGosFtTijkNmqIAmWP9Oq18YK2eyBC79TE1e4btMFtd4K64E1FWOW8hPFlfFxk0dzzuU/xOogkSJQMPl5yfjNSThk90tGI1lFUPMZ0mM2N7rSXsCUKBQq2d6KozL62K3Tet17BFM8MHGHsRvqPwKJ3FGkdNfwUE6+JItULtEIFZSHnqYKc4cjPqOR5xN4RGnO9R/n9pxJT8L4iwzsTqoOd9Ol6fxW5pXrdrdyQTNm1SK0NGbynpj+dMocxofzCHsjAGFTfLtJx0PgufBRfhKGkjDe+udyJjat8N780ZNJjHuQ9EwUO9VwF1GY5L7mHSLNtQWZ94s65VXooNvFMtUFnEAvWOaBe+r+yWUFp/VdST7GbYE1lNgkXCwQN0xOZivsPZDC9p/5TgvBmartz+gt80mAKqExuI818bjr4rXoybC/nq+xO5omL6Ut6wQcWw1B3pE+t3aP4BFGs07dXp5H3vZkqWgij3ZXht9L0zI70sO+eJiHk0qyjDXHGUUsN+ai6D9KmugX027Svr7VUw9FDwv5JY8NZKfGSxpIyHIGHh7Khuv2dZGuwGqbC/UXSu6EBjE0gvaVvdpym9yEjvBUWFzxdrymsPIgju9uAU+o2XQaAYG6KZWHD4qXNvicsVPUVhP6nM3DP21eHiwlXteugOTvkfCJVLTMjU4bBFIRmheD3QqNTSgV8NdbL5+lRLTWnUNsCXgcEUUe+uHn2jtzOB7vhmjbAo9om+iV9lMMn/3quHEZQvbAuLrcfIzvGlub66oUvIEgbrpjszPxDsip1anwtoX+9OB8+OQA3yFM5tTG/0UGomj+woRUs/Df3a9MMS8R0Oi7KnzErcXXBbVdl8yyRn/nylrlarhA89oUsLNnzMP3k8DV9rsbEy2Xm/cgKnv/f5zyolXviuHARDAmho/H6/rjqwYqHtB9KypQ5PQgU9iEB1P0t7BkrO3UCqFQ8J8xz8K7+JGis0ykMlsUpi7fo+gQzfYq+VOzpW9xbMR02ErKsLzyx4niVqBg08kpiZLKG+xmDrS+AZsrD9hAuEXMcXXXpDNwUNAPZ+p3t7TYsoqXLD47TBrCyKhcYC0DhcRtqpWkAPG9B2LcHfnqeX57ujlGaI5TXgTG3a897/3UxEe/IIgR4MZ97MGoturrTN1drjACcTEX5yi5Gig1gbjCbYBDrHLj0prAJswy8rr6SaWsGrY/77dwVYH9SvLIT1xn1BZ0O1fofVqe4JGoYsJch2PdDnyvjOD4kERtia3F69opT6ItpZE0Apl/fXW9pZe3bTfodeNUENo7zbHrMnEGguFABfHOfo6dLILIaQ4TgBq4TY8/ane65oS6X2S+4xr8w1nzk51+lp/R027eRMB5qCxBJA5VEDqhIQl8bHd14KAdtvlNAiRbxoAjZQk2c3KZn8gBLFObXTJbo3zb3A1r4Ypp+f47jCqoskVovgLCxdSQkmC2lm45sF0ikgtQ0B5AR1yrg4k8Kcu2T5RX5dTkSucVGqSK0O+Ntp+wlbPh1rKVErnq+/xw2RlXYx/+OrmUpKvIAJi01AMmyvqQKKva4tE/BgYSXjVE10s2XRfNiOHfQ1K+m5nSMhuYh4SBt6yJwlFXfmACDgnB3RuaXjMZ3/3qAN9K2OEWwCti1e70cLjE0dwzrhtbw4OGq1STSZ1Blws70v9L3WONZ5Lk/BvdmXBzttaCGzqnOxEjObhnfgMM2z+YOgj3xZgzM5ZShtso/VJPYPvkwTohvXWLFvq4gmu8ntmJaUwLWvVtA3+lGVDhsR7GV6ALPWSltb7T7vdLdHjcOQRYB6vYlldyVNlgSm6nYlA8oQemPPAXJtPXWQ4cYKe1Ck/ndWLqlA3vR9icZBVElEtxzB20fOge3SjkOF2DdA4b4gFzMoLVnCQxl8+/6dD0MWYnUBC7sKhnACAO8MoLohg7laNtIdCaLcM7AiuG1ZKP9Oc4BPxnp6FJ/AgwGf+WTPOnTFaG06iQFOMI/FGySdH6ekS8ClGiSCp2JVMl8KfQa8yLKog3JGcVh6OZprLJemeb/zjZiVeI05PKd5YDq28KOYnqozwvIIddJkdTUaGk/Q51hQA+0Km61xFLruxEM52Ri+j2dvrOKslbE1CFzEVaEYAYmNP6cpJERZHho1AA7Wr5PXjUsG0TlNLNdIFGJ+kg71saTtnIOuvmQ+A1piiZKeSu9E+Vu10ctWxdHjjrygGwvkpMxuihUFUYWyFjh2a5K1tbnl5lguB+gJnDrgOw5kSiFBvFPFX9q/i95yu/cyetTngXR69sR6vviXN97sj6vUuI7Hs4NlO7VN5X9j1Wq+sMj4Dt0290X0U85fCfGggWM6DbRjvjtKFRzwX6OvCGIEVawImR8B/SP2XcaCuMnb8pPky+Afk9hd7G0060HEld2LAFRZ1m832dn1YpBZejhbKdB/SVNy7xLFXdrqsAIhO2/H6OL/4Ept4qD9nLYuXwv7znX0WvgLS6Bx5p7/7c2IlLRqr68E+h40Cx8oPgA2+loKURbC0bJY+awAvYl8tFCGRSCDcwuDg7S4l3sH//DSR6wKVj5EUcx0T6DAjGWQO/jB/c0kPfj14Ix6GbrBfEsPY8JA7jENSikqAvwpBiI3R0gf2D/kEImoFyzuf8RWp8KzlX7uhrtWkrFnsI8ygRDd6m9Nryt174O6bA3Ag6PMRz+/FkOo5wZdAeN8ZGUVo8FOcXbAp66YcO3Y1QetFi8VREhUflMMfDg9Q7ARzy9ebjs2ORhOTfjDBZZIh8j4Deip5zOa6zFki8X/HyPKB9HJyGtS3gtlvMLtGCnNROhVWHC6mpNACdj0uBDles0p9X7JRe5ZJUZjyOC+B22wVrcjLibdH4qOhvhUf1rwpNILOhI+fkTMrc4VxN7
dUBTIVz21LWnQ9R3Z65dJEja5DhuVJ2Jj/xwv1ZjJ9bg6AUPgc110je/neMCS0GWcMSyjb/yL8MZeCXvW7v2WJoLx2NkaIu/yJ1HR04RKAGmixrGbhqExT7zyhz7MF2rMIlb+SC0MqGXl8fhVUGAM=', 'page_age': None, 'title': 'San Diego, CA Hourly Weather | AccuWeather', 'type': 'web_search_result', 'url': 'https://www.accuweather.com/en/us/san-diego/92101/hourly-weather-forecast/347628'}], 'tool_use_id': 'srvtoolu_011k5YzwkJEnWpXu5mJX4rJZ', 'type': 'web_search_tool_result'}, {'citations': None, 'text': "Here's the current weather in San Diego:\n\n", 'type': 'text'}, {'citations': [{'cited_text': 'TomorrowTue 12/02 High · 65 °F · 12% Precip. / 0.00 °in Partly cloudy. High around 65F. Winds SW at 5 to 10 mph. ', 'encrypted_index': 'EpEBCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDDZgjgkjXHCrqYAldBoMQ+9FcsSyS4euYUsoIjDq1SaanvsNrJsb6GwMRZx9i22/um6EB+LGCDhQ35jnyuhjTatLWbeKNOdWZjJ28C8qFcykikT/XpGr4/xk0Lhu//sv/18NphgE', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': 'Today (Tuesday, December 2nd) will be partly cloudy with a high around 65°F and winds from the SW at 5 to 10 mph.', 'type': 'text'}, {'citations': None, 'text': ' ', 'type': 'text'}, {'citations': [{'cited_text': 'Tomorrow nightTue 12/02 Low · 54 °F · 24% Precip. / 0.00 °in Cloudy skies.', 'encrypted_index': 'EpEBCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDI0GrK/Q4CkRv2suKhoM5ZejhffWWuarY8TfIjC+sbGEJDZQcxVPt5ep5rgFODTqR02gRRUUhG9hx8Pk4BoJDyzA7jPjAsc0Dwy+dosqFQXjqForGo4hkZqTLThcO55QffVn3xgE', 'title': 'San Diego, CA Weather Conditions | Weather Underground', 'type': 'web_search_result_location', 'url': 'https://www.wunderground.com/weather/us/ca/san-diego'}], 'text': 'Tonight, expect cloudy skies with a low around 54°F.', 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Weather Alerts · , Enter zip code to change location · AK · AL · AR · AZ · CA · CO · CT · DC · DE · FL · GA · HI · IA · ID · IL · IN · KS · KY · LA · ...', 'encrypted_index': 'Eo8BCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDERf0VdX1avUFQiQwxoMwLqbOq/bbIGTd1BGIjAFrP+g3YcVMD6hV0srfjGOu0jVzVe2HIAHuR4KlRo7wxQ2W/NA7RkLrvfzk+0td+IqEwBwAXHDCpf0zR5l4breXyDtnkUYBA==', 'title': 'San Diego, CA Weather Forecast | KGTV | kgtv.com', 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather'}], 'text': "Current conditions show it's clear at 54°F with 90% humidity.", 'type': 'text'}, {'citations': None, 'text': '\n\n', 'type': 'text'}, {'citations': [{'cited_text': 'Weather Alerts · , Enter zip code ... WA · WI · WV · WY · 54° · Clear · feels like 54° · 65° / 54° · Monday · Partly Cloudy · -° / 50° · 5% Tuesday · ...', 'encrypted_index': 'EpEBCioIChgCIiQ4ODk4YTFkYy0yMTNkLTRhNmYtOTljYi03ZTBlNTUzZDc0NWISDDNv12zC1LGgpu5+PRoMfDZYojsfdG0r4+hPIjAa6GdWSs9va0UM/cpX6PtRPYbCuXM7SihUm4LLI9XnoS4dUvh/W3rlog4KysA8AsIqFcjxtfY2rzOlVbLre7Yz7xtexdrxthgE', 'title': 'San Diego, CA Weather Forecast | KGTV | kgtv.com', 'type': 'web_search_result_location', 'url': 'https://www.10news.com/weather'}], 'text': 'Looking ahead: Tuesday has a 32% chance of late showers, followed by partly cloudy conditions Wednesday through Thursday.', 'type': 'text'}] - model:
claude-opus-4-5-20251101 - role:
assistant - stop_reason:
end_turn - stop_sequence:
None - type:
message - usage:
{'cache_creation': {'ephemeral_1h_input_tokens': 0, 'ephemeral_5m_input_tokens': 0}, 'cache_creation_input_tokens': 8149, 'cache_read_input_tokens': 0, 'input_tokens': 2230, 'output_tokens': 250, 'server_tool_use': {'web_search_requests': 1}, 'service_tier': 'standard'}
Third party providers
NB: The 3rd party model list is currently out of date; PRs to fix that would be welcome!
Amazon Bedrock
These are Amazon’s current Claude models:
models_awsProvided boto3 is installed, we otherwise don’t need any extra code to support Amazon Bedrock – we just have to set up the appropriate client:
ab = AnthropicBedrock(
aws_access_key=os.environ['AWS_ACCESS_KEY'],
aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[0], ab)chat = Chat(cli=client)
chat("I'm Jeremy")Google Vertex
models_googfrom anthropic import AnthropicVertex
import google.authproject_id = google.auth.default()[1]
region = "us-east5"
gv = AnthropicVertex(project_id=project_id, region=region)
client = Client(models_goog[-1], gv)chat = Chat(cli=client)
chat("I'm Jeremy")Footnotes
https://www.wunderground.com/weather/us/ca/san-diego “TomorrowTue 12/02 High · 65 °F · 12% Precip. / 0.00 °in Partly cloudy. High around 65F. Winds SW at 5 to 10 mph.”↩︎
https://www.wunderground.com/weather/us/ca/san-diego “Tomorrow nightTue 12/02 Low · 54 °F · 24% Precip. / 0.00 °in Cloudy skies.”↩︎
https://www.10news.com/weather “Weather Alerts · , Enter zip code to change location · AK · AL · AR · AZ · CA · CO · CT · DC · DE · FL · GA · HI · IA · ID · IL · IN · KS · KY · LA · …”↩︎
https://www.10news.com/weather “Weather Alerts · , Enter zip code … WA · WI · WV · WY · 54° · Clear · feels like 54° · 65° / 54° · Monday · Partly Cloudy · -° / 50° · 5% Tuesday · …”↩︎