semlib
Session
Bases: Sort, Filter, Extrema, Find, Apply, Reduce
A session provides a context for performing Semlib operations.
Sessions provide defaults (e.g., default model), manage caching, implement concurrency control, and track costs.
All of a Session's methods have analogous standalone functions (e.g., the Session.map method has an analogous map function). It is recommended to use a Session for any non-trivial use of the semlib library.
Source code in src/semlib/session.py
model
property
model: str
Get the current model being used for completions.
Returns:

| Type | Description |
|---|---|
| str | The model name as a string. |
__init__
__init__(
*,
model: str | None = None,
max_concurrency: int | None = None,
cache: QueryCache | None = None,
)
Initialize.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | str \| None | The language model to use for completions. If not specified, uses the value from the | None |
| max_concurrency | int \| None | Maximum number of concurrent API requests. If not specified, uses the value from the | None |
| cache | QueryCache \| None | If provided, this is used to cache LLM responses to avoid redundant API calls. | None |
Raises:

| Type | Description |
|---|---|
| ValueError | If |
Source code in src/semlib/_internal/base.py
apply
async
apply[T, U: BaseModel](
item: T,
/,
template: str | Callable[[T], str],
*,
return_type: type[U],
model: str | None = None,
) -> U
Apply a language model prompt to a single item.
This method formats a prompt template with the given item, sends it to the language model, and returns the response. The response can be returned as a raw string, parsed into a Pydantic model, or extracted as a bare value using the Bare marker.
This method is a simple wrapper around prompt.
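The two accepted forms of the template parameter can be illustrated with a short plain-Python sketch (render is a hypothetical helper shown for explanation; it is not part of semlib): a string template is formatted with the item, while a callable is simply invoked with it.

```python
from typing import Callable, TypeVar, Union

T = TypeVar("T")

def render(template: Union[str, Callable[[T], str]], item: T) -> str:
    # A callable template receives the item and returns the finished prompt;
    # a string template has a single positional placeholder for the item.
    if callable(template):
        return template(item)
    return template.format(item)

print(render("What is the sum of these numbers: {}?", [1, 2, 3]))
print(render(lambda xs: f"Sum of {xs}?", 7))
```
The rendered string is then sent to the model exactly as a call to prompt would send it.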
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| item | T | The item to apply the template to. | required |
| template | str \| Callable[[T], str] | A template to format with the item. This can be either a string template with a single positional placeholder, or a callable that takes the item and returns a formatted string. | required |
| return_type | type[U] \| Bare[V] \| None | If not specified, the response is returned as a raw string. If a Pydantic model class is provided, the response is parsed into an instance of that model. If a Bare instance is provided, a single value of the specified type is extracted from the response. | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| U \| V \| str | The language model's response in the format specified by return_type. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If |
Examples:
Basic usage:
>>> await session.apply(
... [1, 2, 3, 4, 5],
... template="What is the sum of these numbers: {}?",
... return_type=Bare(int),
... )
15
Source code in src/semlib/apply.py
clear_cache
compare
async
compare[T](
a: T,
b: T,
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T, T], str] | None = None,
task: Task | str | None = None,
model: str | None = None,
) -> Order
Compare two items.
This method uses a language model to compare two items and determine the relative ordering of the two items. The comparison can be customized by specifying either a criteria to compare by, or a custom prompt template. The comparison task can be framed in a number of ways (choosing the greater item, lesser item, or the ordering).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| a | T | The first item to compare. | required |
| b | T | The second item to compare. | required |
| by | str \| None | A criteria specifying what aspect to compare by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T, T], str] \| None | A custom prompt template for the comparison. Must be either a string template with two positional placeholders, or a callable that takes two items and returns a formatted string. If this is provided, | None |
| task | Task \| str \| None | The type of comparison task that is being performed in | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| Order | The ordering of the two items. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing the LLM response fails. |
Examples:
Custom criteria:
>>> await session.compare("California condor", "Bald eagle", by="wingspan")
<Order.GREATER: 'greater'>
Custom template and task:
>>> await session.compare(
... "proton",
... "electron",
... template="Which is smaller, (A) {} or (B) {}?",
... task=Task.CHOOSE_LESSER,
... )
<Order.GREATER: 'greater'>
Source code in src/semlib/compare.py
filter
async
filter[T](
iterable: Iterable[T],
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T], str] | None = None,
negate: bool = False,
model: str | None = None,
) -> list[T]
Filter an iterable based on a criteria.
This method is analogous to Python's built-in filter function.
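The semantics can be sketched in plain Python (sem_filter and the length-based predicate are illustrative stand-ins; in semlib the predicate is an LLM yes/no judgment):

```python
# Keep items for which the predicate holds; negate=True drops them instead.
def sem_filter(items, predicate, negate=False):
    return [x for x in items if predicate(x) != negate]

words = ["apple", "banana", "kiwi", "fig"]
print(sem_filter(words, lambda w: len(w) <= 4))               # short words kept
print(sem_filter(words, lambda w: len(w) <= 4, negate=True))  # short words dropped
```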
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to filter. | required |
| by | str \| None | A criteria specifying a predicate to filter by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T], str] \| None | A custom prompt template for predicates. Must be either a string template with a single positional placeholder, or a callable that takes an item and returns a formatted string. If this is provided, | None |
| negate | bool | If | False |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| list[T] | A new list containing items from the iterable if they match the criteria. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing any LLM response fails. |
Examples:
Basic filter:
>>> await session.filter(["Tom Hanks", "Tom Cruise", "Tom Brady"], by="actor?")
['Tom Hanks', 'Tom Cruise']
Custom template:
>>> await session.filter(
... [(123, 321), (384, 483), (134, 431)],
... template=lambda pair: f"Is {pair[0]} backwards {pair[1]}?",
... negate=True,
... )
[(384, 483)]
Source code in src/semlib/filter.py
find
async
find[T](
iterable: Iterable[T],
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T], str] | None = None,
negate: bool = False,
model: str | None = None,
) -> T | None
Find an item in an iterable based on a criteria.
This method searches through the provided iterable and returns some item (not necessarily the first) that matches the specified criteria.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to search. | required |
| by | str \| None | A criteria specifying a predicate to search by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T], str] \| None | A custom prompt template for predicates. Must be either a string template with a single positional placeholder, or a callable that takes an item and returns a formatted string. If this is provided, | None |
| negate | bool | If | False |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| T \| None | An item from the iterable if it matches the criteria, or None if no item matches. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing any LLM response fails. |
Examples:
Basic find:
>>> await session.find(["Tom Hanks", "Tom Cruise", "Tom Brady"], by="actor?")
'Tom Cruise' # nondeterministic, could also return "Tom Hanks"
Custom template:
>>> await session.find(
... [(123, 321), (384, 483), (134, 431)],
... template=lambda pair: f"Is {pair[0]} backwards {pair[1]}?",
... negate=True,
... )
(384, 483)
Source code in src/semlib/find.py
map
async
map[T, U: BaseModel](
iterable: Iterable[T],
/,
template: str | Callable[[T], str],
*,
return_type: type[U],
model: str | None = None,
) -> list[U]
Map a prompt template over an iterable and get responses from the language model.
This method applies a prompt template to each item in the provided iterable, sends the resulting prompts to the language model, and collects the responses. The responses can be returned as raw strings, parsed into Pydantic models, or extracted as bare values using the Bare marker.
This method is analogous to Python's built-in map function.
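Mechanically, map amounts to rendering the template per item and issuing the resulting prompts concurrently. A minimal sketch, with fake_llm standing in for the model call (it is not part of semlib):

```python
import asyncio

async def fake_llm(prompt: str) -> str:
    await asyncio.sleep(0)   # stand-in for network latency
    return prompt.upper()    # stand-in for the model's response

async def sem_map(items, template):
    # Render one prompt per item, then await all responses concurrently.
    prompts = [template.format(x) for x in items]
    return await asyncio.gather(*(fake_llm(p) for p in prompts))

result = asyncio.run(sem_map(["apple", "kiwi"], "color of {}?"))
print(result)  # ['COLOR OF APPLE?', 'COLOR OF KIWI?']
```
In the real Session, the number of in-flight requests is bounded by max_concurrency.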
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to map over. | required |
| template | str \| Callable[[T], str] | A prompt template to apply to each item. This can be either a string template with a single positional placeholder, or a callable that takes an item and returns a formatted string. | required |
| return_type | type[U] \| Bare[V] \| None | If not specified, the responses are returned as raw strings. If a Pydantic model class is provided, the responses are parsed into instances of that model. If a Bare instance is provided, single values of the specified type are extracted from the responses. | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| list[U] \| list[V] \| list[str] | A list of responses from the language model in the format specified by return_type. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If |
Examples:
Basic map:
>>> await session.map(
... ["apple", "banana", "kiwi"],
... template="What color is {}? Reply in a single word.",
... )
['Red.', 'Yellow.', 'Green.']
Map with structured return type:
>>> class Person(pydantic.BaseModel):
... name: str
... age: int
>>> await session.map(
... ["Barack Obama", "Angela Merkel"],
... template="Who is {}?",
... return_type=Person,
... )
[Person(name='Barack Obama', age=62), Person(name='Angela Merkel', age=69)]
Map with bare return type:
>>> await session.map(
... [42, 1337, 2025],
... template="What are the unique prime factors of {}?",
... return_type=Bare(list[int]),
... )
[[2, 3, 7], [7, 191], [3, 5]]
Source code in src/semlib/map.py
max
async
max[T](
iterable: Iterable[T],
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T, T], str] | None = None,
task: Task | str | None = None,
model: str | None = None,
) -> T
Get the largest item in an iterable.
This method finds the largest item in a collection by using a language model to perform pairwise comparisons.
This method is analogous to Python's built-in max function.
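Finding a maximum only requires a pairwise "which is greater?" oracle, which is what the LLM comparison provides. A plain-Python sketch with a deterministic comparator standing in for the model:

```python
from functools import reduce

def greater(a: str, b: str) -> str:
    # Stand-in comparator: compare by string length instead of an LLM call.
    return a if len(a) >= len(b) else b

def sem_max(items):
    # A left fold over the pairwise comparator yields the overall maximum.
    return reduce(greater, items)

print(sem_max(["fig", "banana", "kiwi"]))  # 'banana'
```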
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to search. | required |
| by | str \| None | A criteria specifying what aspect to compare by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T, T], str] \| None | A custom prompt template for comparisons. Must be either a string template with two positional placeholders, or a callable that takes two items and returns a formatted string. If this is provided, | None |
| task | Task \| str \| None | The type of comparison task that is being performed in | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| T | The largest item. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing any LLM response fails. |
Examples:
Basic usage:
>>> await session.max(
... ["LeBron James", "Kobe Bryant", "Magic Johnson"], by="assists"
... )
'Magic Johnson'
Source code in src/semlib/extrema.py
min
async
min[T](
iterable: Iterable[T],
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T, T], str] | None = None,
task: Task | str | None = None,
model: str | None = None,
) -> T
Get the smallest item in an iterable.
This method finds the smallest item in a collection by using a language model to perform pairwise comparisons.
This method is analogous to Python's built-in min function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to search. | required |
| by | str \| None | A criteria specifying what aspect to compare by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T, T], str] \| None | A custom prompt template for comparisons. Must be either a string template with two positional placeholders, or a callable that takes two items and returns a formatted string. If this is provided, | None |
| task | Task \| str \| None | The type of comparison task that is being performed in | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| T | The smallest item. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing any LLM response fails. |
Source code in src/semlib/extrema.py
prompt
async
Send a prompt to the language model and get a response.
This method sends a single user message to the language model and returns the response. The response can be returned as a raw string, parsed into a Pydantic model, or extracted as a bare value using the Bare marker.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prompt | str | The text prompt to send to the language model. | required |
| return_type | type[T] \| Bare[U] \| None | If not specified, the response is returned as a raw string. If a Pydantic model class is provided, the response is parsed into an instance of that model. If a Bare instance is provided, a single value of the specified type is extracted from the response. | None |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| str \| T \| U | The language model's response in the format specified by return_type. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If |
Examples:
Get structured value:
>>> class Person(BaseModel):
... name: str
... age: int
>>> await session.prompt("Who is Barack Obama?", return_type=Person)
Person(name='Barack Obama', age=62)
Source code in src/semlib/_internal/base.py
reduce
async
reduce(
iterable: Iterable[str],
/,
template: str | Callable[[str, str], str],
*,
associative: bool = False,
model: str | None = None,
) -> str
reduce[T](
iterable: Iterable[str | T],
/,
template: str | Callable[[str | T, str | T], str],
*,
associative: bool = False,
model: str | None = None,
) -> str | T
reduce[T: BaseModel](
iterable: Iterable[T],
/,
template: str | Callable[[T, T], str],
*,
return_type: type[T],
associative: bool = False,
model: str | None = None,
) -> T
reduce[T](
iterable: Iterable[T],
/,
template: str | Callable[[T, T], str],
*,
return_type: Bare[T],
associative: bool = False,
model: str | None = None,
) -> T
Reduce an iterable to a single value using a language model.
This method is analogous to Python's functools.reduce function.
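The associative flag changes the shape of the computation: a sequential fold combines one item at a time, while an associative fold can combine pairs in parallel rounds, reducing the number of sequential LLM round-trips. A plain-Python sketch (tree_reduce is illustrative, not semlib's implementation):

```python
from functools import reduce

combine = lambda a, b: f"({a}+{b})"

# Sequential (left) fold: each combination waits for the previous one.
print(reduce(combine, "abcd"))      # '(((a+b)+c)+d)'

def tree_reduce(fn, items):
    # Combine adjacent pairs in rounds; each round's calls are independent,
    # so they could run concurrently.
    items = list(items)
    while len(items) > 1:
        pairs = [fn(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:
            pairs.append(items[-1])
        items = pairs
    return items[0]

print(tree_reduce(combine, "abcd"))  # '((a+b)+(c+d))'
```
Both shapes give the same answer only when the combining operation is associative, which is why the flag must be opted into.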
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[Any] | The collection of items to reduce. | required |
| template | str \| Callable[[Any, Any], str] | A prompt template to apply to each item. This can be either a string template with two positional placeholders (with the first placeholder being the accumulator and the second placeholder being an item), or a callable that takes an accumulator and an item and returns a formatted string. | required |
| initial | Any | If provided, this value is placed before the items of the iterable in the calculation, and serves as a default when the iterable is empty. | None |
| return_type | Any | The return type is also the type of the accumulator. If not specified, the responses are returned as raw strings. If a Pydantic model class is provided, the responses are parsed into instances of that model. If a Bare instance is provided, single values of the specified type are extracted from the responses. | None |
| associative | bool | If | False |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| Any | The final accumulated value. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If |
Examples:
Basic reduce:
>>> await session.reduce(
... ["one", "three", "seven", "twelve"], "{} + {} = ?", return_type=Bare(int)
... )
23
Reduce with initial value:
>>> await session.reduce(
... range(20),
... template=lambda acc, n: f"If {n} is prime, append it to this list: {acc}.",
... initial=[],
... return_type=Bare(list[int]),
... model="openai/o4-mini",
... )
[2, 3, 5, 7, 11, 13, 17, 19]
Associative reduce:
>>> await session.reduce(
... [[i] for i in range(20)],
... template=lambda acc,
... n: f"Compute the union of these two sets, and then remove any non-prime numbers: {acc} and {n}. Return the result as a list.",
... return_type=Bare(list[int]),
... associative=True,
... model="openai/o4-mini",
... )
[2, 3, 5, 7, 11, 13, 17, 19]
Distinguishing between leaf nodes and internal nodes in an associative reduce with Box:
>>> reviews: list[str] = [
... "The instructions are a bit confusing. It took me a while to figure out how to use it.",
... "It's so loud!",
... "I regret buying this microwave. It's the worst appliance I've ever owned.",
... "This microwave is great! It heats up food quickly and evenly.",
... "This microwave is a waste of money. It doesn't work at all.",
... "I hate the design of this microwave. It looks cheap and ugly.",
... "The turntable is a bit small, so I can't fit larger plates in it.",
... "The microwave is a bit noisy when it's running.",
... "The microwave is a bit expensive compared to other models with similar features.",
... "The turntable is useless, so I can't fit any plates in it.",
... "I love the sleek design of this microwave. It looks great in my kitchen.",
... ]
>>> def template(a: str | Box[str], b: str | Box[str]) -> str:
... # leaf nodes (raw reviews)
... if isinstance(a, Box) and isinstance(b, Box):
... return f'''
... Consider the following two product reviews, and return a bulleted list
... summarizing any actionable product improvements that could be made based on
... the reviews. If there are no actionable product improvements, return an empty
... string.
...
... - Review 1: {a.value}
... - Review 2: {b.value}'''
... # summaries of reviews
... if not isinstance(a, Box) and not isinstance(b, Box):
... return f'''
... Consider the following two lists of ideas for product improvements, and
... combine them while de-duplicating similar ideas. If there are no ideas, return
... an empty string.
...
... # List 1:
... {a}
...
... # List 2:
... {b}'''
... # one is a summary, the other is a raw review
... if isinstance(a, Box) and not isinstance(b, Box):
... ideas = b
... review = a.value
... if not isinstance(a, Box) and isinstance(b, Box):
... ideas = a
... review = b.value
... return f'''
... Consider the following list of ideas for product improvements, and a product
... review. Update the list of ideas based on the review, de-duplicating similar
... ideas. If there are no ideas, return an empty string.
...
... # List of ideas:
... {ideas}
...
... # Review:
... {review}'''
>>> result = await session.reduce(
... map(Box, reviews), template=template, associative=True
... )
>>> print(result)
- Clarify and simplify the product instructions to make them easier to understand.
- Consider reducing the noise level of the product to make it quieter during operation.
- Improve product reliability to ensure the microwave functions correctly for all users.
- Increase the size or adjust the design of the turntable to accommodate larger plates.
- Improve the design to enhance the aesthetic appeal and make it look more premium.
Source code in src/semlib/reduce.py
sort
async
sort[T](
iterable: Iterable[T],
/,
*,
by: str | None = None,
to_str: Callable[[T], str] | None = None,
template: str | Callable[[T, T], str] | None = None,
task: Task | str | None = None,
algorithm: Algorithm | None = None,
reverse: bool = False,
model: str | None = None,
) -> list[T]
Sort an iterable.
This method sorts a collection of items by using a language model to perform pairwise comparisons. The sorting algorithm determines which comparisons to make and how to aggregate the results into a final ranking.
This method is analogous to Python's built-in sorted function.
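The default BordaCount algorithm can be sketched in plain Python: every pair of items is compared once, each pairwise win scores a point, and items are ranked by total score. A deterministic comparator stands in for the LLM judgment here:

```python
from itertools import combinations

def borda_sort(items, greater):
    # Score each item by the number of pairwise comparisons it wins,
    # then order items from fewest wins (smallest) to most wins (largest).
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):
        wins[a if greater(a, b) else b] += 1
    return sorted(items, key=lambda x: wins[x])

colors = {"blue": 450, "green": 530, "red": 700}   # rough wavelengths, nm
print(borda_sort(list(colors), lambda a, b: colors[a] > colors[b]))
# ['blue', 'green', 'red']
```
Aggregating over all pairs makes the ranking tolerant of occasional inconsistent comparisons, at the cost of a quadratic number of comparisons.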
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| iterable | Iterable[T] | The collection of items to sort. | required |
| by | str \| None | A criteria specifying what aspect to compare by. If this is provided, | None |
| to_str | Callable[[T], str] \| None | If specified, used to convert items to string representation. Otherwise, uses | None |
| template | str \| Callable[[T, T], str] \| None | A custom prompt template for comparisons. Must be either a string template with two positional placeholders, or a callable that takes two items and returns a formatted string. If this is provided, | None |
| task | Task \| str \| None | The type of comparison task that is being performed in | None |
| algorithm | Algorithm \| None | The sorting algorithm to use. If not specified, defaults to BordaCount. Different algorithms make different tradeoffs between accuracy, latency, and cost. See the documentation for each algorithm for details. | None |
| reverse | bool | If | False |
| model | str \| None | If specified, overrides the default model for this call. | None |
Returns:

| Type | Description |
|---|---|
| list[T] | A new list containing all items from the iterable in sort order. |

Raises:

| Type | Description |
|---|---|
| ValidationError | If parsing any LLM response fails. |
Examples:
Basic sort:
>>> await session.sort(["blue", "red", "green"], by="wavelength", reverse=True)
['red', 'green', 'blue']
Custom template and task:
>>> from dataclasses import dataclass
>>> @dataclass
... class Person:
... name: str
... job: str
>>> people = [
... Person(name="Barack Obama", job="President of the United States"),
... Person(name="Dalai Lama", job="Spiritual Leader of Tibet"),
... Person(name="Sundar Pichai", job="CEO of Google"),
... ]
>>> await session.sort(
... people,
... template=lambda a, b: f"Which job earns more, (a) {a.job} or (b) {b.job}?",
... )
[
Person(name='Dalai Lama', job='Spiritual Leader of Tibet'),
Person(name='Barack Obama', job='President of the United States'),
Person(name='Sundar Pichai', job='CEO of Google'),
]
Source code in src/semlib/sort/sort.py
QueryCache
Bases: ABC
Abstract base class for a cache of LLM query results.
Caches can be used with Session to avoid repeating identical queries to the LLM.
Source code in src/semlib/cache.py
OnDiskCache
Bases: QueryCache
A persistent on-disk cache of LLM query results, backed by SQLite.
Source code in src/semlib/cache.py
__init__
__init__(path: str) -> None
Initialize an on-disk cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Path to the SQLite database file. If the file does not exist, it will be created. By convention, the filename should have a ".db" or ".sqlite" extension. | required |
Source code in src/semlib/cache.py
InMemoryCache
Bases: QueryCache
An in-memory cache of LLM query results.
Source code in src/semlib/cache.py
Bare
A marker to indicate that a function should return a bare value of type T. This can be passed to the return_type parameter of functions like prompt. For situations where you want to extract a single value of a given base type (like int or list[float]), this is more convenient than the alternative of defining a Pydantic model with a single field for the purpose of extracting that value.
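Conceptually, a Bare marker asks for structured output as a single-field object and unwraps that field for the caller. A plain-Python sketch (BareSketch is illustrative; the real implementation builds a Pydantic model dynamically):

```python
class BareSketch:
    def __init__(self, typ, class_name=None, field_name=None):
        self.typ = typ
        # Defaults mirror the documented behavior: class name falls back to
        # the type's name, field name falls back to "value".
        self.class_name = class_name or getattr(typ, "__name__", str(typ))
        self.field_name = field_name or "value"

    def unwrap(self, parsed: dict):
        # Pull the single field out of the parsed structured response.
        return self.typ(parsed[self.field_name])

bare = BareSketch(int)
print(bare.class_name)            # 'int'
print(bare.unwrap({"value": "42"}))  # 42
```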
Examples:
Influence model output using class_name and field_name:
>>> await session.prompt(
... "Give me a list",
... return_type=Bare(
... list[int], class_name="list_of_three_values", field_name="primes"
... ),
... )
[3, 7, 11]
Source code in src/semlib/bare.py
__init__
Initialize a Bare instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| typ | type[T] | The type of the bare value to extract. | required |
| class_name | str \| None | Name for a dynamically created Pydantic model class. If not provided, defaults to the name of | None |
| field_name | str \| None | Name for the field in the dynamically created Pydantic model that holds the bare value. If not provided, defaults to "value". This name is visible to the LLM and may affect model output. | None |
Source code in src/semlib/bare.py
Box
A container that holds a value of type T.
This can be used to tag values, so that they can be distinguished from other values of the same underlying type. Such a tag can be useful in the context of methods like reduce, where you can use this marker to distinguish leaf nodes from internal nodes in an associative reduce.
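The tagging idea can be sketched in a few lines (BoxSketch and combine are illustrative stand-ins, not semlib's classes): wrapping marks a value as a leaf, so a reduce template can tell raw inputs apart from intermediate results of the same type.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class BoxSketch(Generic[T]):
    value: T  # the wrapped (leaf) value

def combine(a, b) -> str:
    # A reduce template can branch on which arguments are still boxed leaves.
    a_leaf, b_leaf = isinstance(a, BoxSketch), isinstance(b, BoxSketch)
    if a_leaf and b_leaf:
        return "leaf+leaf"            # two raw inputs
    if a_leaf or b_leaf:
        return "mixed"                # one raw input, one intermediate result
    return "summary+summary"          # two intermediate results

print(combine(BoxSketch("review 1"), BoxSketch("review 2")))  # leaf+leaf
print(combine("summary", BoxSketch("review 3")))              # mixed
```
This is exactly the pattern used in the associative-reduce example under reduce above.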
Source code in src/semlib/box.py
value
property
Get the value contained in the Box.
Returns:

| Type | Description |
|---|---|
| T | The value contained in the Box. |
__init__
Initialize a Box instance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | T | The value to be contained in the Box. | required |