Jittor Framework API

Published: 2023/11/28

Jittor Framework API
This is the API documentation of Jittor's main module, which can be obtained via import jittor.
class jittor.ExitHooks
exc_handler(exc_type, exc, *args)
exit(code=0)
hook()
class jittor.Function(*args, **kw)
Function Module for customized backward operations
Example 1 (a Function can have multiple inputs and multiple outputs, and the user can store values for the backward computation):
import jittor as jt
from jittor import Function

class MyFunc(Function):
    def execute(self, x, y):
        self.x = x
        self.y = y
        return x*y, x/y

    def grad(self, grad0, grad1):
        return grad0 * self.y, grad1 * self.x

a = jt.array(3.0)
b = jt.array(4.0)
func = MyFunc()
c,d = func(a, b)
da, db = jt.grad(c+d*3, [a, b])
assert da.data == 4
assert db.data == 9
Example 2 (a Function can return None for no gradient, and the incoming gradient can also be None):
import jittor as jt
from jittor import Function

class MyFunc(Function):
    def execute(self, x, y):
        self.x = x
        self.y = y
        return x*y, x/y

    def grad(self, grad0, grad1):
        assert grad1 is None
        return grad0 * self.y, None

a = jt.array(3.0)
b = jt.array(4.0)
func = MyFunc()
c,d = func(a, b)
d.stop_grad()
da, db = jt.grad(c+d*3, [a, b])
assert da.data == 4
assert db.data == 0
classmethod apply(*args, **kw)
dfs(parents, k, callback, callback_leave=None)
class jittor.Module(*args, **kw)
apply(func)
children()
dfs(parents, k, callback, callback_leave=None)
eval()
execute(*args, **kw)
extra_repr()
is_training()
load(path)
load_parameters(params)
load_state_dict(params)
modules()
mpi_param_broadcast(root=0)
named_modules()
named_parameters()
parameters()
register_forward_hook(func)
register_pre_forward_hook(func)
save(path)
state_dict()
train()
jittor.argmax(x, dim, keepdims: jittor_core.ops.bool = False)
jittor.argmin(x, dim, keepdims: jittor_core.ops.bool = False)
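Per the arg_reduce declaration below, these reductions produce both the index and the value of the extreme element along dim. For the 1-D case, the rule can be sketched in pure Python (illustration only, not Jittor's implementation):

```python
def argmax_sketch(values):
    # Index and value of the maximum element of a 1-D sequence.
    best = 0
    for i, v in enumerate(values):
        if v > values[best]:
            best = i
    return best, values[best]

print(argmax_sketch([1, 7, 3]))  # (1, 7)
```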
jittor.array(data, dtype=None)
jittor.attrs(var)
jittor.clamp(x, min_v=None, max_v=None)
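clamp limits every element into the closed range [min_v, max_v]; a bound left as None is open. A pure-Python sketch of the element-wise rule:

```python
def clamp_sketch(values, min_v=None, max_v=None):
    # Element-wise clamp: push each value into [min_v, max_v].
    out = []
    for v in values:
        if min_v is not None and v < min_v:
            v = min_v
        if max_v is not None and v > max_v:
            v = max_v
        out.append(v)
    return out

print(clamp_sketch([-2.0, 0.5, 3.0], min_v=0.0, max_v=1.0))  # [0.0, 0.5, 1.0]
```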
jittor.clean()
jittor.detach(x)
jittor.dirty_fix_pytorch_runtime_error()
This function should be called before importing pytorch.
Example:
import jittor as jt
jt.dirty_fix_pytorch_runtime_error()
import torch
jittor.display_memory_info()
jittor.fetch(*args)
Async fetch vars with function closure.
Example 1:
for img, label in your_dataset:
    pred = your_model(img)
    loss = critic(pred, label)
    acc = accuracy(pred, label)
    jt.fetch(acc, loss,
        lambda acc, loss:
            print(f"loss:{loss} acc:{acc}"))
Example 2:
for i, (img, label) in enumerate(your_dataset):
    pred = your_model(img)
    loss = critic(pred, label)
    acc = accuracy(pred, label)
    # variable i will be bound into the function closure
    jt.fetch(i, acc, loss,
        lambda i, acc, loss:
            print(f"#{i}, loss:{loss} acc:{acc}"))
class jittor.flag_scope(**jt_flags)
jittor.flatten(input, start_dim=0, end_dim=-1)
flatten dimensions by reshape
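Because flatten is a reshape, its effect is fully described by the output shape: dimensions start_dim through end_dim (inclusive) are merged into a single dimension. A sketch of that shape computation:

```python
def flatten_shape(shape, start_dim=0, end_dim=-1):
    # Shape produced by collapsing dims [start_dim, end_dim] into one,
    # which is what a reshape-based flatten does.
    n = len(shape)
    start = start_dim % n   # normalize negative dims
    end = end_dim % n
    merged = 1
    for s in shape[start:end + 1]:
        merged *= s
    return shape[:start] + [merged] + shape[end + 1:]

print(flatten_shape([2, 3, 4]))               # [24]
print(flatten_shape([2, 3, 4], start_dim=1))  # [2, 12]
```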
jittor.format(v, spec)
jittor.full(shape, val, dtype='float32')
jittor.full_like(x, val)
jittor.get_len(var)
jittor.grad(loss, targets)
jittor.jittor_exit()
jittor.liveness_info()
jittor.load(path)
class jittor.log_capture_scope(**jt_flags)
log capture scope
example:
with jt.log_capture_scope(log_v=0) as logs:
    LOG.v("…")
print(logs)
jittor.make_module(func, exec_n_args=1)
jittor.masked_fill(x, mask, value)
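masked_fill replaces the elements selected by a boolean mask with a scalar value, keeping the rest. A pure-Python sketch of the 1-D semantics:

```python
def masked_fill_sketch(x, mask, value):
    # Where the mask element is truthy, take `value`; otherwise keep x's element.
    return [value if m else v for v, m in zip(x, mask)]

print(masked_fill_sketch([1.0, 2.0, 3.0], [0, 1, 0], -1.0))  # [1.0, -1.0, 3.0]
```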
class jittor.no_grad(**jt_flags)
no_grad scope; all variables created inside this scope will stop grad.
Example:
import jittor as jt

with jt.no_grad():
    ...
jittor.norm(x, k, dim)
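jittor.norm(x, k, dim) computes a k-norm along a dimension; for a 1-D vector the rule reduces to (Σ|v|^k)^(1/k). A pure-Python sketch of that formula:

```python
def norm_sketch(values, k):
    # k-norm of a vector: (sum of |v|**k) ** (1/k)
    return sum(abs(v) ** k for v in values) ** (1.0 / k)

print(norm_sketch([3.0, 4.0], 2))   # 5.0 (Euclidean norm)
print(norm_sketch([1.0, -2.0], 1))  # 3.0 (L1 norm)
```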
jittor.normal(mean, std, size=None, dtype='float32')
jittor.ones(shape, dtype='float32')
jittor.ones_like(x)
jittor.permute(x, dim)
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
jittor.pow(x, y)
class jittor.profile_scope(warmup=0, rerun=0, **jt_flags)
profile scope
example:
with jt.profile_scope() as report:
    ...
print(report)
jittor.rand(size, dtype='float32', requires_grad=False)
jittor.randn(size, dtype='float32', requires_grad=False)
jittor.reshape(x, shape)
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
jittor.safepickle(obj, path)
jittor.safeunpickle(path)
jittor.save(params_dict, path)
jittor.single_process_scope(rank=0)
Code in this scope will only be executed by a single process.
All MPI code inside this scope will have no effect; mpi.world_rank() and mpi.local_rank() will return 0, and world_size() will return 1.
example:
@jt.single_process_scope(rank=0)
def xxx():
    ...
jittor.size(v, dim=None)
jittor.sqr(x)
jittor.squeeze(x, dim)
jittor.start_grad(x)
jittor.std(x)
jittor.to_bool(v)
jittor.to_float(v)
jittor.to_int(v)
jittor.transpose(x, dim)
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
jittor.type_as(a, b)
jittor.unsqueeze(x, dim)
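squeeze and unsqueeze are pure shape transforms: unsqueeze inserts a size-1 axis at dim, squeeze removes one. Sketched here on plain shape lists (negative dim handling shown only for unsqueeze; only size-1 axes can be squeezed):

```python
def unsqueeze_shape(shape, dim):
    # Insert a size-1 axis at position `dim` (negative dims count from the end).
    n = len(shape) + 1
    if dim < 0:
        dim += n
    return shape[:dim] + [1] + shape[dim:]

def squeeze_shape(shape, dim):
    # Remove axis `dim`; only a size-1 axis can be squeezed.
    assert shape[dim] == 1
    return shape[:dim] + shape[dim + 1:]

print(unsqueeze_shape([3, 4], 0))   # [1, 3, 4]
print(squeeze_shape([1, 3, 4], 0))  # [3, 4]
```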
jittor.view(x, shape)
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
jittor.vtos(v)
jittor.zeros(shape, dtype='float32')
jittor.zeros_like(x)
jittor.core
The following are Jittor's core APIs; they can be accessed via jittor.core.XXX or directly via jittor.XXX.
class jittor_core.DumpGraphs
inputs
Declaration: vector<vector> inputs;
nodes_info
Declaration: vector nodes_info;
outputs
Declaration: vector<vector> outputs;
class jittor_core.MemInfo
total_cpu_ram
Declaration: int64 total_cpu_ram;
total_cuda_ram
Declaration: int64 total_cuda_ram;
class jittor_core.NanoString
class jittor_core.NanoVector
append()
Declaration: inline void push_back_check_overflow(int64 v)
class jittor_core.RingBuffer
clear()
Declaration: inline void clear()
is_stop()
Declaration: inline bool is_stop()
keep_numpy_array()
Declaration: inline void keep_numpy_array(bool keep)
pop()
Declaration: PyObject* pop()
push()
Declaration: void push(PyObject* obj)
recv()
Declaration: PyObject* pop()
send()
Declaration: void push(PyObject* obj)
stop()
Declaration: inline void stop()
total_pop()
Declaration: inline uint64 total_pop()
total_push()
Declaration: inline uint64 total_push()
jittor_core.Var
alias of jittor_core.jittor_core.Var
jittor_core.cleanup()
Declaration: void cleanup()
jittor_core.clear_trace_data()
Declaration: void clear_trace_data()
jittor_core.display_memory_info()
Declaration: void display_memory_info(const char* fileline="", bool dump_var=false, bool red_color=false)
jittor_core.dump_all_graphs()
Declaration: DumpGraphs dump_all_graphs()
jittor_core.dump_trace_data()
Declaration: PyObject* dump_trace_data()
jittor_core.fetch_sync()
Declaration: vector fetch_sync(const vector<VarHolder*>& vh)
class jittor_core.flags
addr2line_path
Document:
addr2line_path(type:string, default:""): Path of addr2line.
Declaration: string _get_addr2line_path()
cache_path
Document:
cache_path(type:string, default:""): Cache path of jittor
Declaration: string _get_cache_path()
cc_flags
Document:
cc_flags(type:string, default:""): Flags of C++ compiler
Declaration: string _get_cc_flags()
cc_path
Document:
cc_path(type:string, default:""): Path of C++ compiler
Declaration: string _get_cc_path()
cc_type
Document:
cc_type(type:string, default:""): Type of C++ compiler (clang, icc, g++)
Declaration: string _get_cc_type()
check_graph
Document:
check_graph(type:int, default:0): Unify graph sanity check.
Declaration: int _get_check_graph()
compile_options
Document:
compile_options(type:fast_shared_ptr<loop_options_t>, default:{}): Override the default loop transform options
Declaration: fast_shared_ptr<loop_options_t> _get_compile_options()
cuda_archs
Document:
cuda_archs(type:vector, default:{}): Cuda arch
Declaration: vector _get_cuda_archs()
enable_tuner
Document:
enable_tuner(type:int, default:1): Enable tuner.
Declaration: int _get_enable_tuner()
exclude_pass
Document:
exclude_pass(type:string, default:""): Don't run certain passes.
Declaration: string _get_exclude_pass()
extra_gdb_cmd
Document:
extra_gdb_cmd(type:string, default:""): Extra commands passed to GDB, separated by (;).
Declaration: string _get_extra_gdb_cmd()
gdb_attach
Document:
gdb_attach(type:int, default:0): gdb attach self process.
Declaration: int _get_gdb_attach()
gdb_path
Document:
gdb_path(type:string, default:""): Path of GDB.
Declaration: string _get_gdb_path()
has_pybt
Document:
has_pybt(type:int, default:0): GDB has pybt or not.
Declaration: int _get_has_pybt()
jit_search_kernel
Document:
jit_search_kernel(type:int, default:0): Jit search for the fastest kernel.
Declaration: int _get_jit_search_kernel()
jit_search_rerun
Document:
jit_search_rerun(type:int, default:10):
Declaration: int _get_jit_search_rerun()
jit_search_warmup
Document:
jit_search_warmup(type:int, default:2):
Declaration: int _get_jit_search_warmup()
jittor_path
Document:
jittor_path(type:string, default:""): Source path of jittor
Declaration: string _get_jittor_path()
l1_cache_size
Document:
l1_cache_size(type:int, default:32768): size of level 1 cache (byte)
Declaration: int _get_l1_cache_size()
lazy_execution
Document:
lazy_execution(type:int, default:1): Enabled by default. If disabled, eager execution is used immediately instead of lazy execution. Disabling this flag produces better error messages and traceback information, but raises memory consumption and lowers performance.
Declaration: int _get_lazy_execution()
log_silent
Document:
log_silent(type:int, default:0): The log will be completely silent.
Declaration: int _get_log_silent()
log_sync
Document:
log_sync(type:int, default:0): Set log printed synchronously.
Declaration: int _get_log_sync()
log_v
Document:
log_v(type:int, default:0): Verbose level of logging
Declaration: int _get_log_v()
log_vprefix
Document:
log_vprefix(type:string, default:""): Verbose level of logging prefix
example: log_vprefix='op=1,node=2,executor.cc:38$=1000'
Declaration: string _get_log_vprefix()
no_grad
Document:
no_grad(type:bool, default:0): No grad for all jittor Var creation
Declaration: bool _get_no_grad()
nvcc_flags
Document:
nvcc_flags(type:string, default:""): Flags of CUDA C++ compiler
Declaration: string _get_nvcc_flags()
nvcc_path
Document:
nvcc_path(type:string, default:""): Path of CUDA C++ compiler
Declaration: string _get_nvcc_path()
profiler_enable
Document:
profiler_enable(type:int, default:0): Enable profiler.
Declaration: int _get_profiler_enable()
profiler_hide_relay
Document:
profiler_hide_relay(type:int, default:0): Profiler hide relayed op.
Declaration: int _get_profiler_hide_relay()
profiler_rerun
Document:
profiler_rerun(type:int, default:0): Profiler rerun.
Declaration: int _get_profiler_rerun()
profiler_warmup
Document:
profiler_warmup(type:int, default:0): Profiler warmup.
Declaration: int _get_profiler_warmup()
python_path
Document:
python_path(type:string, default:""): Path of python interpreter
Declaration: string _get_python_path()
rewrite_op
Document:
rewrite_op(type:int, default:1): Rewrite source file of jit operator or not
Declaration: int _get_rewrite_op()
stat_allocator_total_alloc_byte
Document:
stat_allocator_total_alloc_byte(type:size_t, default:0): Total alloc byte
Declaration: size_t _get_stat_allocator_total_alloc_byte()
stat_allocator_total_alloc_call
Document:
stat_allocator_total_alloc_call(type:size_t, default:0): Number of alloc function call
Declaration: size_t _get_stat_allocator_total_alloc_call()
stat_allocator_total_free_byte
Document:
stat_allocator_total_free_byte(type:size_t, default:0): Total free byte
Declaration: size_t _get_stat_allocator_total_free_byte()
stat_allocator_total_free_call
Document:
stat_allocator_total_free_call(type:size_t, default:0): Number of free function calls
Declaration: size_t _get_stat_allocator_total_free_call()
trace_depth
Document:
trace_depth(type:int, default:10): trace depth for GDB.
Declaration: int _get_trace_depth()
trace_py_var
Document:
trace_py_var(type:int, default:0): Trace py stack max depth for debug.
Declaration: int _get_trace_py_var()
try_use_32bit_index
Document:
try_use_32bit_index(type:int, default:0): If not overflow, try to use 32 bit type as index type.
Declaration: int _get_try_use_32bit_index()
update_queue_auto_flush_delay
Document:
update_queue_auto_flush_delay(type:int, default:2): when the size of an update queue is greater than this value, the update queue triggers an auto flush (default 2).
Declaration: int _get_update_queue_auto_flush_delay()
use_cuda
Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
use_cuda_managed_allocator
Document:
use_cuda_managed_allocator(type:int, default:1): Enable cuda_managed_allocator
Declaration: int get_use_cuda_managed_allocator()
use_nfef_allocator
Document:
use_nfef_allocator(type:int, default:0): Enable never free exact fit allocator
Declaration: int get_use_nfef_allocator()
use_parallel_op_compiler
Document:
use_parallel_op_compiler(type:int, default:16): Number of threads the parallel op compiler uses (default 16); setting this value to 0 disables the parallel op compiler.
Declaration: int get_use_parallel_op_compiler()
use_sfrl_allocator
Document:
use_sfrl_allocator(type:int, default:1): Enable sfrl allocator
Declaration: int get_use_sfrl_allocator()
use_stat_allocator
Document:
use_stat_allocator(type:int, default:0): Enable stat allocator
Declaration: int get_use_stat_allocator()
jittor_core.gc()
Declaration: void gc_all()
jittor_core.get_device_count()
Declaration: inline int get_device_count()
jittor_core.get_mem_info()
Declaration: inline MemInfo get_mem_info()
jittor_core.grad()
Declaration: vector<VarHolder*> grad(VarHolder* loss, const vector<VarHolder*>& targets)
jittor_core.graph_check()
Declaration: void do_graph_check()
jittor_core.hash()
Document:
simple hash function
Declaration: inline uint hash(const char* input)
jittor_core.number_of_hold_vars()
Declaration: inline static uint64 get_number_of_hold_vars()
jittor_core.number_of_lived_ops()
Declaration: inline static int64 get_number_of_lived_ops()
jittor_core.number_of_lived_vars()
Declaration: inline static int64 get_number_of_lived_vars()
jittor_core.print_trace()
Declaration: inline static void print_trace()
jittor_core.seed()
Declaration: void set_seed(int seed)
jittor_core.set_lock_path()
Declaration: void set_lock_path(string path)
jittor_core.set_seed()
Declaration: void set_seed(int seed)
jittor_core.sync()
Declaration: void sync(const vector<VarHolder*>& vh=vector<VarHolder*>(), bool device_sync=false)
jittor_core.sync_all()
Declaration: void sync_all(bool device_sync=false)
jittor_core.tape_together()
Declaration: void tape_together(
const vector<VarHolder*>& taped_inputs, const vector<VarHolder*>& taped_outputs, GradCallback&& grad_callback
)
jittor.ops
The following is the API documentation of Jittor's basic operator module; these APIs can be accessed via jittor.ops.XXX or directly via jittor.XXX.
jittor_core.ops.abs()
Declaration: VarHolder* abs(VarHolder* x)
jittor_core.ops.acos()
Declaration: VarHolder* acos(VarHolder* x)
jittor_core.ops.acosh()
Declaration: VarHolder* acosh(VarHolder* x)
jittor_core.ops.add()
Declaration: VarHolder* add(VarHolder* x, VarHolder* y)
jittor_core.ops.all()
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.any()
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.arccos()
Declaration: VarHolder* acos(VarHolder* x)
jittor_core.ops.arccosh()
Declaration: VarHolder* acosh(VarHolder* x)
jittor_core.ops.arcsin()
Declaration: VarHolder* asin(VarHolder* x)
jittor_core.ops.arcsinh()
Declaration: VarHolder* asinh(VarHolder* x)
jittor_core.ops.arctan()
Declaration: VarHolder* atan(VarHolder* x)
jittor_core.ops.arctanh()
Declaration: VarHolder* atanh(VarHolder* x)
jittor_core.ops.arg_reduce()
Declaration: vector<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)
jittor_core.ops.argsort()
Document: *
Argsort Operator performs an indirect sort by a given key or compare function.
x is input, y is the output index, satisfying:
x[y[0]] <= x[y[1]] <= x[y[2]] <= … <= x[y[n]]
or
key(y[0]) <= key(y[1]) <= key(y[2]) <= … <= key(y[n])
or
compare(y[0], y[1]) && compare(y[1], y[2]) && …
* [in] x: input var for sort
* [in] dim: sort along which dim
* [in] descending: whether the elements are sorted in descending order or not (default False)
* [in] dtype: type of returned indexes
* [out] index: index has the same size as the sorted dim
* [out] value: sorted value
Example:
index, value = jt.argsort([11,13,12])
# return [0 2 1], [11 12 13]
index, value = jt.argsort([11,13,12], descending=True)
# return [1 2 0], [13 12 11]
index, value = jt.argsort([[11,13,12], [12,11,13]])
# return [[0 2 1],[1 0 2]], [[11 12 13],[11 12 13]]
index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0)
# return [[0 1 0],[1 0 1]], [[11 11 12],[12 13 13]]

Declaration: vector<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)
jittor_core.ops.array()
Declaration: VarHolder* array__(PyObject* obj)
jittor_core.ops.array_()
Declaration: VarHolder* array_(ArrayArgs&& args)
jittor_core.ops.asin()
Declaration: VarHolder* asin(VarHolder* x)
jittor_core.ops.asinh()
Declaration: VarHolder* asinh(VarHolder* x)
jittor_core.ops.atan()
Declaration: VarHolder* atan(VarHolder* x)
jittor_core.ops.atanh()
Declaration: VarHolder* atanh(VarHolder* x)
jittor_core.ops.binary()
Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)
jittor_core.ops.bitwise_and()
Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)
jittor_core.ops.bitwise_not()
Declaration: VarHolder* bitwise_not(VarHolder* x)
jittor_core.ops.bitwise_or()
Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)
jittor_core.ops.bitwise_xor()
Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)
jittor_core.ops.bool()
Declaration: VarHolder* bool_(VarHolder* x)
jittor_core.ops.broadcast()
Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector()) Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.ops.broadcast_var()
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.ops.candidate()
Document: *
Candidate Operator performs an indirect candidate filter given a fail condition.
x is input, y is the output index, satisfying:
not fail_cond(y[0], y[1]) and
not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and
… and not fail_cond(y[m-2], y[m-1])
where m is the number of selected candidates.
Pseudo code:
y = []
for i in range(n):
    pass = True
    for j in y:
        if (@fail_cond):
            pass = False
            break
    if (pass):
        y.append(i)
return y
* [in] x: input var for filter
* [in] fail_cond: code for the fail condition
* [in] dtype: type of returned indexes
* [out] index: the selected candidate indexes.
Example:
jt.candidate(jt.random(100,2), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')

return y satisfy:

x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and … and x[y[m-2], 0] <= x[y[m-1], 0] and

x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and … and x[y[m-2], 1] <= x[y[m-1], 1]

Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)
jittor_core.ops.cast()
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
jittor_core.ops.ceil()
Declaration: VarHolder* ceil(VarHolder* x)
jittor_core.ops.clone()
Declaration: VarHolder* clone(VarHolder* x)
jittor_core.ops.code()
Document: *
Code Operator for easily customized ops.
* [in] shape: the output shape, an integer array
* [in] dtype: the output data type
* [in] inputs: a list of input jittor Vars
* [in] cpu_src: cpu source code string, builtin values:
  - in{x}, in{x}_shape{y}, in{x}_stride{y}, in{x}_type, in{x}_p, @in0(…)
  - out{x}, out{x}_shape{y}, out{x}_stride{y}, out{x}_type, out{x}_p, @out0(…)
  - out, out_shape{y}, out_stride{y}, out_type, out_p, @out(…)
* [in] cpu_header: cpu header code string.
* [in] cuda_src: cuda source code string.
* [in] cuda_header: cuda header code string.
Example-1:
from jittor import Function
import jittor as jt

class Func(Function):
    def execute(self, x):
        self.save_vars = x
        return jt.code(x.shape, x.dtype, [x],
            cpu_src='''
                for (int i=0; i<in0_shape0; i++)
                    @out(i) = @in0(i)*@in0(i)*2;
            ''')

    def grad(self, grad_x):
        x = self.save_vars
        return jt.code(x.shape, x.dtype, [x, grad_x],
            cpu_src='''
                for (int i=0; i<in0_shape0; i++)
                    @out(i) = @in1(i)*@in0(i)*4;
            ''')

a = jt.random([10])
func = Func()
b = func(a)
print(b)
print(jt.grad(b,a))
Example-2:
a = jt.array([3,2,1])
b = jt.code(a.shape, a.dtype, [a],
    cpu_header="""
        #include <algorithm>
        @alias(a, in0)
        @alias(b, out)
    """,
    cpu_src="""
        for (int i=0; i<a_shape0; i++)
            @b(i) = @a(i);
        std::sort(&@b(0), &@b(in0_shape0));
    """
)
assert (b.data==[1,2,3]).all()
Example-3:
# This example shows how to set multiple outputs in code op.
a = jt.array([3,2,1])
b,c = jt.code([(1,), (1,)], [a.dtype, a.dtype], [a],
    cpu_header="""
        #include <iostream>
        #include <algorithm>
        using namespace std;
    """,
    cpu_src="""
        @alias(a, in0)
        @alias(b, out0)
        @alias(c, out1)
        @b(0) = @c(0) = @a(0);
        for (int i=0; i<a_shape0; i++) {
            @b(0) = std::min(@b(0), @a(i));
            @c(0) = std::max(@c(0), @a(i));
        }
        cout << "min:" << @b(0) << " max:" << @c(0) << endl;
    """
)
assert b.data == 1, b
assert c.data == 3, c
Example-4:
# This example shows how to use dynamic shapes of jittor variables.
a = jt.array([5,-4,3,-2,1])

# negative shape means the max size of a varying dimension
b,c = jt.code([(-5,), (-5,)], [a.dtype, a.dtype], [a],
    cpu_src="""
        @alias(a, in0)
        @alias(b, out0)
        @alias(c, out1)
        int num_b=0, num_c=0;
        for (int i=0; i<a_shape0; i++) {
            if (@a(i)>0)
                @b(num_b++) = @a(i);
            else
                @c(num_c++) = @a(i);
        }
        b->set_shape({num_b});
        c->set_shape({num_c});
    """
)
assert (b.data == [5,3,1]).all()
assert (c.data == [-4,-2]).all()
CUDA Example-1:
# This example shows how to use CUDA in code op.
import jittor as jt
from jittor import Function
jt.flags.use_cuda = 1

class Func(Function):
    def execute(self, a, b):
        self.save_vars = a, b
        return jt.code(a.shape, a.dtype, [a,b],
            cuda_src='''
                __global__ static void kernel1(@ARGS_DEF) {
                    @PRECALC
                    int i = threadIdx.x + blockIdx.x * blockDim.x;
                    int stride = blockDim.x * gridDim.x;
                    for (; i<in0_shape0; i+=stride)
                        @out(i) = @in0(i)*@in1(i);
                }
                kernel1<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);
            ''')

    def grad(self, grad):
        a, b = self.save_vars
        return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],
            cuda_src='''
                __global__ static void kernel2(@ARGS_DEF) {
                    @PRECALC
                    int i = threadIdx.x + blockIdx.x * blockDim.x;
                    int stride = blockDim.x * gridDim.x;
                    for (; i<in0_shape0; i+=stride) {
                        @out0(i) = @in2(i)*@in1(i);
                        @out1(i) = @in2(i)*@in0(i);
                    }
                }
                kernel2<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);
            ''')

a = jt.random([100000])
b = jt.random([100000])
func = Func()
c = func(a,b)
print(c)
print(jt.grad(c, [a, b]))
CUDA Example-2:
# This example shows how to use multi-dimensional data with CUDA.
import jittor as jt
from jittor import Function
jt.flags.use_cuda = 1

class Func(Function):
    def execute(self, a, b):
        self.save_vars = a, b
        return jt.code(a.shape, a.dtype, [a,b],
            cuda_src='''
                __global__ static void kernel1(@ARGS_DEF) {
                    @PRECALC
                    for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)
                        for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x)
                            @out(i,j) = @in0(i,j)*@in1(i,j);
                }
                kernel1<<<32, 32>>>(@ARGS);
            ''')

    def grad(self, grad):
        a, b = self.save_vars
        return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],
            cuda_src='''
                __global__ static void kernel2(@ARGS_DEF) {
                    @PRECALC
                    for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)
                        for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x) {
                            @out0(i,j) = @in2(i,j)*@in1(i,j);
                            @out1(i,j) = @in2(i,j)*@in0(i,j);
                        }
                }
                kernel2<<<32, 32>>>(@ARGS);
            ''')

a = jt.random((100,100))
b = jt.random((100,100))
func = Func()
c = func(a,b)
print(c)
print(jt.grad(c, [a, b]))
Declaration: VarHolder* code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs={}, string&& cpu_src="", vector&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector&& cuda_grad_src={}, string&& cuda_header="")
Declaration: vector<VarHolder*> code_(vector&& shapes, vector&& dtypes, vector<VarHolder*>&& inputs={}, string&& cpu_src="", vector&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector&& cuda_grad_src={}, string&& cuda_header="")
Declaration: vector<VarHolder*> code__(vector<VarHolder*>&& inputs, vector<VarHolder*>&& outputs, string&& cpu_src="", vector&& cpu_grad_src={}, string&& cpu_header="", string&& cuda_src="", vector&& cuda_grad_src={}, string&& cuda_header="")
jittor_core.ops.copy()
Declaration: VarHolder* copy(VarHolder* x)
jittor_core.ops.cos()
Declaration: VarHolder* cos(VarHolder* x)
jittor_core.ops.cosh()
Declaration: VarHolder* cosh(VarHolder* x)
jittor_core.ops.divide()
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
jittor_core.ops.empty()
Declaration: VarHolder* empty(NanoVector shape, NanoString dtype=ns_float32)
jittor_core.ops.equal()
Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)
jittor_core.ops.erf()
Declaration: VarHolder* erf(VarHolder* x)
jittor_core.ops.exp()
Declaration: VarHolder* exp(VarHolder* x)
jittor_core.ops.fetch()
Declaration: VarHolder* fetch(vector<VarHolder*>&& inputs, FetchFunc&& func)
jittor_core.ops.float32()
Declaration: VarHolder* float32_(VarHolder* x)
jittor_core.ops.float64()
Declaration: VarHolder* float64_(VarHolder* x)
jittor_core.ops.floor()
Declaration: VarHolder* floor(VarHolder* x)
jittor_core.ops.floor_divide()
Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)
jittor_core.ops.getitem()
Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)
jittor_core.ops.greater()
Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)
jittor_core.ops.greater_equal()
Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)
jittor_core.ops.index()
Document: *
Index Operator generates indexes of a shape.
It performs the equivalent Python-pseudo implementation below:
n = len(shape)-1
x = np.zeros(shape, dtype)
for i0 in range(shape[0]): # 1-st loop
    for i1 in range(shape[1]): # 2-nd loop
        ... # many loops
            for in in range(shape[n]): # (n+1)-th loop
                x[i0,i1,…,in] = i@dim
* [in] shape: the output shape, an integer array
* [in] dim: the dim of the index.
* [in] dtype: the data type string, default int32
Example:
print(jt.index([2,2], 0)())
# output: [[0,0],[1,1]]
print(jt.index([2,2], 1)())
# output: [[0,1],[0,1]]

Declaration: VarHolder* index(NanoVector shape, int64 dim, NanoString dtype=ns_int32)
Declaration: vector<VarHolder*> index_(NanoVector shape, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
jittor_core.ops.index_var()
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
jittor_core.ops.int16()
Declaration: VarHolder* int16_(VarHolder* x)
jittor_core.ops.int32()
Declaration: VarHolder* int32_(VarHolder* x)
jittor_core.ops.int64()
Declaration: VarHolder* int64_(VarHolder* x)
jittor_core.ops.int8()
Declaration: VarHolder* int8_(VarHolder* x)
jittor_core.ops.left_shift()
Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)
jittor_core.ops.less()
Declaration: VarHolder* less(VarHolder* x, VarHolder* y)
jittor_core.ops.less_equal()
Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)
jittor_core.ops.log()
Declaration: VarHolder* log(VarHolder* x)
jittor_core.ops.logical_and()
Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)
jittor_core.ops.logical_not()
Declaration: VarHolder* logical_not(VarHolder* x)
jittor_core.ops.logical_or()
Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)
jittor_core.ops.logical_xor()
Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)
jittor_core.ops.max()
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
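Several reduce declarations above take a uint dims_mask instead of a dims vector. A plausible reading of that signature, not confirmed by this document, is a bit mask with bit i set when dimension i is reduced; the conversion from a dims list would then be:

```python
def dims_to_mask(dims):
    # Hypothetical encoding: set bit i for every reduced dimension i.
    mask = 0
    for d in dims:
        mask |= 1 << d
    return mask

print(dims_to_mask([0, 2]))  # 5 (binary 101)
```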
jittor_core.ops.maximum()
Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)
jittor_core.ops.mean()
Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.min()
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.minimum()
Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)
jittor_core.ops.mod()
Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)
jittor_core.ops.multiply()
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
jittor_core.ops.negative()
Declaration: VarHolder* negative(VarHolder* x)
jittor_core.ops.not_equal()
Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)
jittor_core.ops.numpy_code()
Document: *
Numpy Code Operator for easily customized ops.
* [in] shape: the output shape, an integer array
* [in] dtype: the output data type
* [in] inputs: a list of input jittor Vars
* [in] forward: function, represents the forward python function
* [in] backward: a list of functions, representing the gradient for each input
Example-1:
def forward_code(np, data):
    a = data["inputs"][0]
    b = data["outputs"][0]
    np.add(a,a,out=b)

def backward_code(np, data):
    dout = data["dout"]
    out = data["outputs"][0]
    np.copyto(out, dout*2.0)

a = jt.random((5,1))
b = jt.numpy_code(
    a.shape,
    a.dtype,
    [a],
    forward_code,
    [backward_code],
)
Example-2:
def forward_code(np, data):
    a,b = data["inputs"]
    c,d = data["outputs"]
    np.add(a,b,out=c)
    np.subtract(a,b,out=d)

def backward_code1(np, data):
    dout = data["dout"]
    out = data["outputs"][0]
    np.copyto(out, dout)

def backward_code2(np, data):
    dout = data["dout"]
    out_index = data["out_index"]
    out = data["outputs"][0]
    if out_index==0:
        np.copyto(out, dout)
    else:
        np.negative(dout, out)

a = jt.random((5,1))
b = jt.random((5,1))
c, d = jt.numpy_code(
    [a.shape, a.shape],
    [a.dtype, a.dtype],
    [a, b],
    forward_code,
    [backward_code1,backward_code2],
)
Declaration: VarHolder* numpy_code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector&& backward) Declaration: vector<VarHolder*> numpy_code_(vector&& shapes, vector&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector&& backward) Declaration: VarHolder* numpy_code__(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward) Declaration: vector<VarHolder*> numpy_code___(vector&& shapes, vector&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward)
jittor_core.ops.pow()
Declaration: VarHolder* pow(VarHolder* x, VarHolder* y)
jittor_core.ops.prod()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.product()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false) Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false) Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.random()
Declaration: VarHolder* random(NanoVector shape, NanoString dtype=ns_float32, NanoString type=ns_uniform)
jittor_core.ops.reduce()
Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)
jittor_core.ops.reduce_add()
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_bitwise_and()
Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_bitwise_or()
Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_bitwise_xor()
Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_logical_and()
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_logical_or()
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_logical_xor()
Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_maximum()
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_minimum()
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reduce_multiply()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.reindex()
Document: *
Reindex Operator is a one-to-many map operator. It is equivalent to the Python pseudo-implementation below:

# input is x, output is y
n = len(shape)-1
m = len(x.shape)-1
k = len(overflow_conditions)-1
y = np.zeros(shape, x.dtype)
for i0 in range(shape[0]):              # 1st loop
    for i1 in range(shape[1]):          # 2nd loop
        ...                             # many loops
        for in in range(shape[n]):      # (n+1)-th loop
            if is_overflow(i0,i1,...,in):
                y[i0,i1,...,in] = overflow_value
            else:
                # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
                y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]]

is_overflow is defined as follows:

def is_overflow(i0,i1,...,in):
    return (
        indexes[0] < 0 || indexes[0] >= x.shape[0] ||
        indexes[1] < 0 || indexes[1] >= x.shape[1] ||
        ...
        indexes[m] < 0 || indexes[m] >= x.shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ...
        overflow_conditions[k]
    )

* [in] x: an input jittor Var
* [in] shape: the output shape, an integer array
* [in] indexes: array of C++-style integer expressions; its length should equal the number of dimensions of x. Builtin variables it can use are:
  * XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden
  * YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem
  * i0, i1, ..., in
  * @e0(...), @e1(...) for extras input index
  * e0p, e1p, ... for extras input pointer
* [in] overflow_value: the value written on overflow
* [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The builtin variables it can use are the same as indexes
* [in] extras: extra vars used for indexing
Example (convolution implemented by the reindex operation):
def conv(x, w):
    N,H,W,C = x.shape
    Kh, Kw, _C, Kc = w.shape
    assert C==_C
    xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [
        'i0',    # Nid
        'i1+i3', # Hid+Khid
        'i2+i4', # Wid+Kwid
        'i5',    # Cid
    ])
    ww = w.broadcast_var(xx)
    yy = xx*ww
    y = yy.sum([3,4,5]) # Kh, Kw, C
    return y, yy
Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector&& indexes, float64 overflow_value=0, vector&& overflow_conditions={}, vector<VarHolder*>&& extras={})
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector&& overflow_conditions={})
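The pseudo-implementation above can be sketched in pure NumPy. `reindex_np` below is a hypothetical illustration of the semantics, with Python lambdas standing in for the C++ index expressions; it is not how Jittor executes the op.

```python
import numpy as np

def reindex_np(x, shape, index_fns, overflow_value=0):
    # one-to-many map: every output cell pulls one input cell, or
    # overflow_value when any computed index falls outside x
    y = np.full(shape, overflow_value, x.dtype)
    for idx in np.ndindex(*shape):
        src = tuple(f(*idx) for f in index_fns)
        if all(0 <= s < d for s, d in zip(src, x.shape)):
            y[idx] = x[src]
    return y

# shift a 3x3 matrix one step right along axis 1; out-of-range reads give 0
x = np.arange(9).reshape(3, 3)
y = reindex_np(x, (3, 3), [lambda i0, i1: i0, lambda i0, i1: i1 - 1])
# y == [[0,0,1],[0,3,4],[0,6,7]]
```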
jittor_core.ops.reindex_reduce()
Document: *
Reindex Reduce Operator is a many-to-one map operator. It is equivalent to the Python pseudo-implementation below:

# input is y, output is x
n = len(y.shape)-1
m = len(shape)-1
k = len(overflow_conditions)-1
x = np.zeros(shape, y.dtype)
x[:] = initial_value(op)
for i0 in range(y.shape[0]):              # 1st loop
    for i1 in range(y.shape[1]):          # 2nd loop
        ...                               # many loops
        for in in range(y.shape[n]):      # (n+1)-th loop
            # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
            xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m]
            if not is_overflow(xi0,xi1,...,xim):
                x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in])

is_overflow is defined as follows:

def is_overflow(xi0,xi1,...,xim):
    return (
        xi0 < 0 || xi0 >= shape[0] ||
        xi1 < 0 || xi1 >= shape[1] ||
        ...
        xim < 0 || xim >= shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ...
        overflow_conditions[k]
    )

* [in] y: an input jittor Var
* [in] op: a string representing the reduce operation type
* [in] shape: the output shape, an integer array
* [in] indexes: array of C++-style integer expressions; its length should equal the length of shape. Builtin variables it can use are:
  * XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem
  * YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden
  * i0, i1, ..., in
  * @e0(...), @e1(...) for extras input index
  * e0p, e1p, ... for extras input pointer
* [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The builtin variables it can use are the same as indexes.
* [in] extras: extra vars used for indexing
Example (pooling implemented by the reindex_reduce operation):
def pool(x, size, op):
    N,H,W,C = x.shape
    h = (H+size-1)//size
    w = (W+size-1)//size
    return x.reindex_reduce(op, [N,h,w,C], [
        "i0", # Nid
        f"i1/{size}", # Hid
        f"i2/{size}", # Wid
        "i3", # Cid
    ])
Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector&& indexes, vector&& overflow_conditions={}, vector<VarHolder*>&& extras={})
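The pooling example maps onto a plain nested loop. The NumPy sketch below specializes the reindex_reduce semantics to a maximum reduce (an illustration only; `pool_np` is a hypothetical helper, not Jittor's kernel):

```python
import numpy as np

def pool_np(x, size):
    # many-to-one map: each input cell is reduced into out[n, i//size, j//size, c]
    N, H, W, C = x.shape
    h, w = (H + size - 1) // size, (W + size - 1) // size
    out = np.full((N, h, w, C), -np.inf)   # initial_value for a maximum reduce
    for n in range(N):
        for i in range(H):
            for j in range(W):
                for c in range(C):
                    hi, wi = i // size, j // size
                    out[n, hi, wi, c] = max(out[n, hi, wi, c], x[n, i, j, c])
    return out

x = np.random.rand(1, 4, 4, 1)
y = pool_np(x, 2)   # 2x2 max pooling
```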
jittor_core.ops.reindex_var()
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector&& overflow_conditions={})
jittor_core.ops.reshape()
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
jittor_core.ops.right_shift()
Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)
jittor_core.ops.round()
Declaration: VarHolder* round(VarHolder* x)
jittor_core.ops.setitem()
Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)
jittor_core.ops.sigmoid()
Declaration: VarHolder* sigmoid(VarHolder* x)
jittor_core.ops.sin()
Declaration: VarHolder* sin(VarHolder* x)
jittor_core.ops.sinh()
Declaration: VarHolder* sinh(VarHolder* x)
jittor_core.ops.sqrt()
Declaration: VarHolder* sqrt(VarHolder* x)
jittor_core.ops.subtract()
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
jittor_core.ops.sum()
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.ops.tan()
Declaration: VarHolder* tan(VarHolder* x)
jittor_core.ops.tanh()
Declaration: VarHolder* tanh(VarHolder* x)
jittor_core.ops.tape()
Declaration: VarHolder* tape(VarHolder* x)
jittor_core.ops.ternary()
Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)
jittor_core.ops.transpose()
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
jittor_core.ops.uint16()
Declaration: VarHolder* uint16_(VarHolder* x)
jittor_core.ops.uint32()
Declaration: VarHolder* uint32_(VarHolder* x)
jittor_core.ops.uint64()
Declaration: VarHolder* uint64_(VarHolder* x)
jittor_core.ops.uint8()
Declaration: VarHolder* uint8_(VarHolder* x)
jittor_core.ops.unary()
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
jittor_core.ops.where()
Document: *
The Where Operator generates the indexes of true conditions.
* [in] cond: condition for index generation
* [in] dtype: type of the returned indexes
* [out] out: an array of index vars, one per dimension of cond
Example:
jt.where([[0,0,1],[1,0,0]])

return ( [0,1], [2,0] )
Declaration: vector<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)
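On a boolean mask this behaves like NumPy's `np.nonzero`: one index array per dimension of the condition. A NumPy comparison sketch:

```python
import numpy as np

cond = np.array([[0, 0, 1],
                 [1, 0, 0]])
rows, cols = np.nonzero(cond)   # the true cells are (0,2) and (1,0)
# rows == [0, 1], cols == [2, 0]
```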
jittor.Var
This is the API documentation for Jittor's basic Var class. These APIs can be accessed directly via my_jittor_var.XXX.
jittor_core.Var.abs()
Declaration: VarHolder* abs(VarHolder* x)
jittor_core.Var.acos()
Declaration: VarHolder* acos(VarHolder* x)
jittor_core.Var.acosh()
Declaration: VarHolder* acosh(VarHolder* x)
jittor_core.Var.add()
Declaration: VarHolder* add(VarHolder* x, VarHolder* y)
jittor_core.Var.all_()
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.any_()
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.arccos()
Declaration: VarHolder* acos(VarHolder* x)
jittor_core.Var.arccosh()
Declaration: VarHolder* acosh(VarHolder* x)
jittor_core.Var.arcsin()
Declaration: VarHolder* asin(VarHolder* x)
jittor_core.Var.arcsinh()
Declaration: VarHolder* asinh(VarHolder* x)
jittor_core.Var.arctan()
Declaration: VarHolder* atan(VarHolder* x)
jittor_core.Var.arctanh()
Declaration: VarHolder* atanh(VarHolder* x)
jittor_core.Var.arg_reduce()
Declaration: vector<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)
jittor_core.Var.argsort()
Document: *
The Argsort Operator performs an indirect sort by a given key or compare function.
x is the input, y is the output index, satisfying:
x[y[0]] <= x[y[1]] <= x[y[2]] <= ... <= x[y[n]]
or
key(y[0]) <= key(y[1]) <= key(y[2]) <= ... <= key(y[n])
or
compare(y[0], y[1]) && compare(y[1], y[2]) && ...
* [in] x: input var to sort
* [in] dim: the dimension along which to sort
* [in] descending: whether the elements are sorted in descending order (default False)
* [in] dtype: type of the returned indexes
* [out] index: indexes, the same size as the sorted dim
* [out] value: sorted values
Example:
index, value = jt.argsort([11,13,12])

return [0 2 1], [11 12 13]

index, value = jt.argsort([11,13,12], descending=True)

return [1 2 0], [13 12 11]

index, value = jt.argsort([[11,13,12], [12,11,13]])

return [[0 2 1],[1 0 2]], [[11 12 13],[11 12 13]]

index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0)

return [[0 1 0],[1 0 1]], [[11 11 12],[12 13 13]]

Declaration: vector<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)
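The first documented example can be reproduced with NumPy's argsort, which returns the same permutation (shown for comparison only):

```python
import numpy as np

x = np.array([11, 13, 12])
index = np.argsort(x)   # permutation that sorts x ascending
value = x[index]        # x gathered through the permutation
# index == [0, 2, 1], value == [11, 12, 13]
```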
jittor_core.Var.asin()
Declaration: VarHolder* asin(VarHolder* x)
jittor_core.Var.asinh()
Declaration: VarHolder* asinh(VarHolder* x)
jittor_core.Var.assign()
Declaration: VarHolder* assign(VarHolder* v)
jittor_core.Var.atan()
Declaration: VarHolder* atan(VarHolder* x)
jittor_core.Var.atanh()
Declaration: VarHolder* atanh(VarHolder* x)
jittor_core.Var.binary()
Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)
jittor_core.Var.bitwise_and()
Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)
jittor_core.Var.bitwise_not()
Declaration: VarHolder* bitwise_not(VarHolder* x)
jittor_core.Var.bitwise_or()
Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)
jittor_core.Var.bitwise_xor()
Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)
jittor_core.Var.bool()
Declaration: VarHolder* bool_(VarHolder* x)
jittor_core.Var.broadcast()
Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector())
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.Var.broadcast_var()
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.Var.candidate()
Document: *
The Candidate Operator performs an indirect candidate filter given a fail condition.
x is the input, y is the output index, satisfying:
not fail_cond(y[0], y[1]) and
not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and
... and not fail_cond(y[m-2], y[m-1])
where m is the number of selected candidates.
Pseudo code:
y = []
for i in range(n):
    pass = True
    for j in y:
        if (@fail_cond):
            pass = False
            break
    if (pass):
        y.append(i)
return y
* [in] x: input var to filter
* [in] fail_cond: code for the fail condition
* [in] dtype: type of the returned indexes
* [out] index: the indexes of the selected candidates
Example:
jt.candidate(jt.random((100,2)), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')

returns y satisfying:

x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and ... and x[y[m-2], 0] <= x[y[m-1], 0] and

x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and ... and x[y[m-2], 1] <= x[y[m-1], 1]

Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)
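The pseudo code above can be written out in plain Python. `candidate_np` below is a hypothetical sketch of the semantics, with the example's fail condition inlined as a lambda (not Jittor's implementation):

```python
import numpy as np

def candidate_np(x, fail_cond):
    # keep index i only if fail_cond is false against every kept index j
    y = []
    for i in range(len(x)):
        if all(not fail_cond(x, i, j) for j in y):
            y.append(i)
    return y

x = np.random.rand(100, 2)
y = candidate_np(x, lambda x, i, j: x[j, 0] > x[i, 0] or x[j, 1] > x[i, 1])
# every kept pair is nondecreasing in both coordinates
```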
jittor_core.Var.cast()
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
jittor_core.Var.ceil()
Declaration: VarHolder* ceil(VarHolder* x)
jittor_core.Var.clone()
Declaration: VarHolder* clone(VarHolder* x)
jittor_core.Var.compile_options
Declaration: inline loop_options_t compile_options()
jittor_core.Var.copy()
Declaration: VarHolder* copy(VarHolder* x)
jittor_core.Var.cos()
Declaration: VarHolder* cos(VarHolder* x)
jittor_core.Var.cosh()
Declaration: VarHolder* cosh(VarHolder* x)
jittor_core.Var.data
Document: * Get a numpy array which shares the data with the var.
Declaration: inline DataView data()
jittor_core.Var.debug_msg()
Declaration: string debug_msg()
jittor_core.Var.detach()
Document:
detach the grad
Declaration: inline VarHolder* detach()
jittor_core.Var.divide()
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
jittor_core.Var.double()
Declaration: VarHolder* float64_(VarHolder* x)
jittor_core.Var.dtype
Declaration: inline NanoString dtype()
jittor_core.Var.equal()
Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)
jittor_core.Var.erf()
Declaration: VarHolder* erf(VarHolder* x)
jittor_core.Var.exp()
Declaration: VarHolder* exp(VarHolder* x)
jittor_core.Var.expand()
Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector())
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.Var.expand_as()
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
jittor_core.Var.fetch_sync()
Declaration: ArrayArgs fetch_sync()
jittor_core.Var.float()
Declaration: VarHolder* float32_(VarHolder* x)
jittor_core.Var.float32()
Declaration: VarHolder* float32_(VarHolder* x)
jittor_core.Var.float64()
Declaration: VarHolder* float64_(VarHolder* x)
jittor_core.Var.floor()
Declaration: VarHolder* floor(VarHolder* x)
jittor_core.Var.floor_divide()
Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)
jittor_core.Var.getitem()
Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)
jittor_core.Var.greater()
Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)
jittor_core.Var.greater_equal()
Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)
jittor_core.Var.index()
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
jittor_core.Var.index_var()
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
jittor_core.Var.int()
Declaration: VarHolder* int32_(VarHolder* x)
jittor_core.Var.int16()
Declaration: VarHolder* int16_(VarHolder* x)
jittor_core.Var.int32()
Declaration: VarHolder* int32_(VarHolder* x)
jittor_core.Var.int64()
Declaration: VarHolder* int64_(VarHolder* x)
jittor_core.Var.int8()
Declaration: VarHolder* int8_(VarHolder* x)
jittor_core.Var.is_stop_fuse()
Declaration: inline bool is_stop_fuse()
jittor_core.Var.is_stop_grad()
Declaration: inline bool is_stop_grad()
jittor_core.Var.item()
Document: * Get one item of data.
Declaration: ItemData item()
jittor_core.Var.left_shift()
Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)
jittor_core.Var.less()
Declaration: VarHolder* less(VarHolder* x, VarHolder* y)
jittor_core.Var.less_equal()
Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)
jittor_core.Var.log()
Declaration: VarHolder* log(VarHolder* x)
jittor_core.Var.logical_and()
Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)
jittor_core.Var.logical_not()
Declaration: VarHolder* logical_not(VarHolder* x)
jittor_core.Var.logical_or()
Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)
jittor_core.Var.logical_xor()
Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)
jittor_core.Var.max()
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.maximum()
Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)
jittor_core.Var.mean()
Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.min()
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.minimum()
Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)
jittor_core.Var.minimum()
Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)
jittor_core.Var.mod()
Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)
jittor_core.Var.multiply()
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
jittor_core.Var.name()
Declaration: inline VarHolder* name(const char* s)
Declaration: inline const char* name()
jittor_core.Var.ndim
Declaration: inline int ndim()
jittor_core.Var.negative()
Declaration: VarHolder* negative(VarHolder* x)
jittor_core.Var.not_equal()
Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)
jittor_core.Var.numel()
Declaration: inline int64 numel()
jittor_core.Var.numpy()
Declaration: ArrayArgs fetch_sync()
jittor_core.Var.prod()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.product()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce()
Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)
jittor_core.Var.reduce_add()
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_bitwise_and()
Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_bitwise_or()
Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_bitwise_xor()
Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_logical_and()
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_logical_or()
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_logical_xor()
Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_maximum()
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_minimum()
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reduce_multiply()
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.reindex()
Document: *
Reindex Operator is a one-to-many map operator. It is equivalent to the Python pseudo-implementation below:

# input is x, output is y
n = len(shape)-1
m = len(x.shape)-1
k = len(overflow_conditions)-1
y = np.zeros(shape, x.dtype)
for i0 in range(shape[0]):              # 1st loop
    for i1 in range(shape[1]):          # 2nd loop
        ...                             # many loops
        for in in range(shape[n]):      # (n+1)-th loop
            if is_overflow(i0,i1,...,in):
                y[i0,i1,...,in] = overflow_value
            else:
                # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
                y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]]

is_overflow is defined as follows:

def is_overflow(i0,i1,...,in):
    return (
        indexes[0] < 0 || indexes[0] >= x.shape[0] ||
        indexes[1] < 0 || indexes[1] >= x.shape[1] ||
        ...
        indexes[m] < 0 || indexes[m] >= x.shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ...
        overflow_conditions[k]
    )

* [in] x: an input jittor Var
* [in] shape: the output shape, an integer array
* [in] indexes: array of C++-style integer expressions; its length should equal the number of dimensions of x. Builtin variables it can use are:
  * XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden
  * YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem
  * i0, i1, ..., in
  * @e0(...), @e1(...) for extras input index
  * e0p, e1p, ... for extras input pointer
* [in] overflow_value: the value written on overflow
* [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The builtin variables it can use are the same as indexes
* [in] extras: extra vars used for indexing
Example (convolution implemented by the reindex operation):
def conv(x, w):
    N,H,W,C = x.shape
    Kh, Kw, _C, Kc = w.shape
    assert C==_C
    xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [
        'i0',    # Nid
        'i1+i3', # Hid+Khid
        'i2+i4', # Wid+Kwid
        'i5',    # Cid
    ])
    ww = w.broadcast_var(xx)
    yy = xx*ww
    y = yy.sum([3,4,5]) # Kh, Kw, C
    return y, yy
Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector&& indexes, float64 overflow_value=0, vector&& overflow_conditions={}, vector<VarHolder*>&& extras={})
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector&& overflow_conditions={})
jittor_core.Var.reindex_reduce()
Document: *
Reindex Reduce Operator is a many-to-one map operator. It is equivalent to the Python pseudo-implementation below:

# input is y, output is x
n = len(y.shape)-1
m = len(shape)-1
k = len(overflow_conditions)-1
x = np.zeros(shape, y.dtype)
x[:] = initial_value(op)
for i0 in range(y.shape[0]):              # 1st loop
    for i1 in range(y.shape[1]):          # 2nd loop
        ...                               # many loops
        for in in range(y.shape[n]):      # (n+1)-th loop
            # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
            xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m]
            if not is_overflow(xi0,xi1,...,xim):
                x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in])

is_overflow is defined as follows:

def is_overflow(xi0,xi1,...,xim):
    return (
        xi0 < 0 || xi0 >= shape[0] ||
        xi1 < 0 || xi1 >= shape[1] ||
        ...
        xim < 0 || xim >= shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ...
        overflow_conditions[k]
    )

* [in] y: an input jittor Var
* [in] op: a string representing the reduce operation type
* [in] shape: the output shape, an integer array
* [in] indexes: array of C++-style integer expressions; its length should equal the length of shape. Builtin variables it can use are:
  * XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem
  * YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden
  * i0, i1, ..., in
  * @e0(...), @e1(...) for extras input index
  * e0p, e1p, ... for extras input pointer
* [in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The builtin variables it can use are the same as indexes.
* [in] extras: extra vars used for indexing
Example (pooling implemented by the reindex_reduce operation):
def pool(x, size, op):
    N,H,W,C = x.shape
    h = (H+size-1)//size
    w = (W+size-1)//size
    return x.reindex_reduce(op, [N,h,w,C], [
        "i0", # Nid
        f"i1/{size}", # Hid
        f"i2/{size}", # Wid
        "i3", # Cid
    ])
Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector&& indexes, vector&& overflow_conditions={}, vector<VarHolder*>&& extras={})
jittor_core.Var.reindex_var()
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector&& overflow_conditions={})
jittor_core.Var.requires_grad
Declaration: inline bool get_requires_grad()
jittor_core.Var.right_shift()
Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)
jittor_core.Var.round()
Declaration: VarHolder* round(VarHolder* x)
jittor_core.Var.setitem()
Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)
jittor_core.Var.shape
Declaration: inline NanoVector shape()
jittor_core.Var.share_with()
Declaration: inline VarHolder* share_with(VarHolder* other)
jittor_core.Var.sigmoid()
Declaration: VarHolder* sigmoid(VarHolder* x)
jittor_core.Var.sin()
Declaration: VarHolder* sin(VarHolder* x)
jittor_core.Var.sinh()
Declaration: VarHolder* sinh(VarHolder* x)
jittor_core.Var.sqrt()
Declaration: VarHolder* sqrt(VarHolder* x)
jittor_core.Var.stop_fuse()
Declaration: inline VarHolder* stop_fuse()
jittor_core.Var.stop_grad()
Declaration: inline VarHolder* stop_grad()
jittor_core.Var.subtract()
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
jittor_core.Var.sum()
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
jittor_core.Var.swap()
Declaration: inline VarHolder* swap(VarHolder* v)
jittor_core.Var.sync()
Declaration: void sync(bool device_sync = false)
jittor_core.Var.tan()
Declaration: VarHolder* tan(VarHolder* x)
jittor_core.Var.tanh()
Declaration: VarHolder* tanh(VarHolder* x)
jittor_core.Var.tape()
Declaration: VarHolder* tape(VarHolder* x)
jittor_core.Var.ternary()
Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)
jittor_core.Var.uint16()
Declaration: VarHolder* uint16_(VarHolder* x)
jittor_core.Var.uint32()
Declaration: VarHolder* uint32_(VarHolder* x)
jittor_core.Var.uint64()
Declaration: VarHolder* uint64_(VarHolder* x)
jittor_core.Var.uint8()
Declaration: VarHolder* uint8_(VarHolder* x)
jittor_core.Var.unary()
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
jittor_core.Var.uncertain_shape
Declaration: inline NanoVector uncertain_shape()
jittor_core.Var.update()
Document:
Updates a parameter or global variable.
Unlike assign, it stops gradient flow between the original var and the assigned var, and performs the update in the background.
Declaration: VarHolder* update(VarHolder* v)
jittor_core.Var.where()
Document:
Where Operator generates the indexes of true conditions.
* [in] cond: condition for index generation
* [in] dtype: type of the returned indexes
* [out] out: a tuple of index arrays, one per dimension of cond
Example:
jt.where([[0,0,1],[1,0,0]])

return ( [0,1], [2,0] )

Declaration: vector<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)
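Assuming numpy-style semantics (one index array per dimension, each holding the coordinates of the true elements), the 2-D case can be sketched in plain Python; this is an illustrative reimplementation, not Jittor's compiled op:

```python
def where_indexes(cond):
    # Collect the row and column indexes of every true element,
    # mirroring the (dim0_indexes, dim1_indexes) output of jt.where.
    rows, cols = [], []
    for i, row in enumerate(cond):
        for j, v in enumerate(row):
            if v:
                rows.append(i)
                cols.append(j)
    return rows, cols

print(where_indexes([[0, 0, 1], [1, 0, 0]]))  # ([0, 1], [2, 0])
```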
jittor.Misc
This is the API documentation for Jittor's basic operator module; these APIs can be accessed directly via jittor.misc.XXX or jittor.XXX.
jittor.misc.all(x, dim=[])
jittor.misc.any(x, dim)
jittor.misc.arange(start=0, end=None, step=1, dtype=None)
jittor.misc.arctan2(y, x)
jittor.misc.auto_parallel(n, src, **kw)
Automatically parallelizes (on CPU and GPU) an n-d for-loop function, as below:
Before:
void inner_func(int n0, int i0, int n1, int i1) {
    ...
}
for (int i0=0; i0<n0; i0++)
    for (int i1=0; i1<n1; i1++)
        inner_func(n0, i0, n1, i1, …);
After:
@python.jittor.auto_parallel(2)
void inner_func(int n0, int i0, int n1, int i1) {
    ...
}
inner_func(n0, 0, n1, 0, …);
jittor.misc.chunk(x, chunks, dim=0)
Splits a var into a specific number of chunks. Each chunk is a view of the input var.
Last chunk will be smaller if the var size along the given dimension dim is not divisible by chunks.
Args:
input (var) – the var to split.
chunks (int) – number of chunks to return.
dim (int) – dimension along which to split the var.
Example:

x = jt.random((10,3,3))
res = jt.chunk(x, 2, 0)
print(res[0].shape, res[1].shape)
[5,3,3,] [5,3,3,]
jittor.misc.cross(input, other, dim=-1)
Returns the cross product of vectors in dimension dim of input and other.
The cross product is calculated as (a1,a2,a3) x (b1,b2,b3) = (a2b3-a3b2, a3b1-a1b3, a1b2-a2b1).
input and other must have the same size, and the size of their dim dimension must be 3.
If dim is not given, it defaults to the first dimension found with size 3.
Args:
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor
dim (int, optional) – the dimension to take the cross-product in.
Example:

input = jt.random((6,3))
other = jt.random((6,3))
jt.cross(input, other, dim=1)
[[-0.42732686 0.6827885 -0.49206433]
[ 0.4651107 0.27036983 -0.5580432 ]
[-0.31933784 0.10543461 0.09676848]
[-0.58346975 -0.21417202 0.55176204]
[-0.40861478 0.01496297 0.38638002]
[ 0.18393655 -0.04907863 -0.17928357]]

jt.cross(input, other)
[[-0.42732686 0.6827885 -0.49206433]
[ 0.4651107 0.27036983 -0.5580432 ]
[-0.31933784 0.10543461 0.09676848]
[-0.58346975 -0.21417202 0.55176204]
[-0.40861478 0.01496297 0.38638002]
[ 0.18393655 -0.04907863 -0.17928357]]
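The component formula above can be checked with a plain-Python sketch for a single pair of 3-vectors (illustrative only; jt.cross operates on whole Vars):

```python
def cross3(a, b):
    # Cross product of two 3-vectors, following
    # (a1,a2,a3) x (b1,b2,b3) = (a2b3-a3b2, a3b1-a1b3, a1b2-a2b1).
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1)

print(cross3((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
print(cross3((1, 2, 3), (4, 5, 6)))  # (-3, 6, -3)
```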
jittor.misc.cumprod(x, dim=0)
jittor.misc.cumsum(x, dim=None)
x: jt.Var, e.g. of shape [batch_size, N].
Returns the cumulative sum of x along dimension dim.
jittor.misc.cumsum_backward(np, data)
jittor.misc.cumsum_forward(np, data)
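cumsum_forward and cumsum_backward are the numpy-backed kernels behind cumsum. Conceptually, since y_j = sum over i <= j of x_i, the gradient of a cumulative sum is the reversed cumulative sum of the incoming gradient. A plain-Python sketch of this forward/backward pair (illustrative, not the actual kernels):

```python
def cumsum(xs):
    # Forward: running sum over a 1-D sequence.
    out, s = [], 0
    for x in xs:
        s += x
        out.append(s)
    return out

def cumsum_grad(grad_out):
    # Backward: dL/dx_i = sum over j >= i of dL/dy_j,
    # i.e. the reversed cumulative sum of the output gradient.
    return cumsum(grad_out[::-1])[::-1]

print(cumsum([1, 2, 3]))       # [1, 3, 6]
print(cumsum_grad([1, 1, 1]))  # [3, 2, 1]
```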
jittor.misc.deg2rad(x)
jittor.misc.diag(x, diagonal=0)
jittor.misc.expand(x, shape)
jittor.misc.flip(x, dim=0)
Reverses the order of an n-D var along the given axes in dims.
Args:
input (var) – the input var.
dims (a list or tuple) – axis to flip on.
Example:

x = jt.array([[1,2,3,4]])
x.flip(1)
[[4 3 2 1]]
jittor.misc.gather(x, dim, index)
jittor.misc.hypot(a, b)
jittor.misc.index_fill_(x, dim, indexs, val)
Fills the elements of the input tensor with value val by selecting the indices in the order given in indexs.
Args:
x – the input tensor.
dim – dimension along which to index.
indexs – indices of the input tensor to fill in.
val – the value to fill with.
jittor.misc.kthvalue(input, k, dim=None, keepdim=False)
jittor.misc.log2(x)
jittor.misc.make_grid(x, nrow=8, padding=2, normalize=False, range=None, scale_each=False, pad_value=0)
jittor.misc.median(x, dim=None, keepdim=False)
jittor.misc.meshgrid(*tensors)
Takes N tensors, each of which can be a 1-dimensional vector, and creates N n-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by the other inputs.
jittor.misc.nms(dets, thresh)
dets: jt.array whose rows are [x1, y1, x2, y2, score], i.e. dets[:,0]->x1, dets[:,1]->y1, dets[:,2]->x2, dets[:,3]->y2, dets[:,4]->score.
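The suppression rule (keep boxes in descending score order, dropping any box whose IoU with an already-kept box exceeds thresh) can be sketched in plain Python. This is an illustrative O(n²) reimplementation on lists; the exact threshold and tie-breaking conventions of the compiled jt.nms op may differ:

```python
def nms_sketch(dets, thresh):
    # dets: list of [x1, y1, x2, y2, score]; returns kept indexes.
    order = sorted(range(len(dets)), key=lambda i: dets[i][4], reverse=True)
    keep = []
    for i in order:
        x1, y1, x2, y2, _ = dets[i]
        area_i = (x2 - x1) * (y2 - y1)
        suppressed = False
        for j in keep:
            a1, b1, a2, b2, _ = dets[j]
            iw = max(0.0, min(x2, a2) - max(x1, a1))
            ih = max(0.0, min(y2, b2) - max(y1, b1))
            inter = iw * ih
            union = area_i + (a2 - a1) * (b2 - b1) - inter
            if union > 0 and inter / union > thresh:
                suppressed = True
                break
        if not suppressed:
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10, 0.9], [1, 1, 10, 10, 0.8], [20, 20, 30, 30, 0.7]]
print(nms_sketch(boxes, 0.5))  # [0, 2]
```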
jittor.misc.nonzero(x)
Returns the indices of the elements of the input tensor that are not equal to zero.
jittor.misc.normalize(input, p=2, dim=1, eps=1e-12)
Performs L_p normalization of the input over the specified dimension.
Args:
input – input array of any shape
p (float) – the exponent value in the norm formulation. Default: 2
dim (int) – the dimension to reduce. Default: 1
eps (float) – small value to avoid division by zero. Default: 1e-12
Example:

x = jt.random((6,3))
[[0.18777736 0.9739261 0.77647036]
[0.13710196 0.27282116 0.30533272]
[0.7272278 0.5174613 0.9719775 ]
[0.02566639 0.37504175 0.32676998]
[0.0231761 0.5207773 0.70337296]
[0.58966476 0.49547017 0.36724383]]

jt.normalize(x)
[[0.14907198 0.7731768 0.61642134]
[0.31750825 0.63181424 0.7071063 ]
[0.5510936 0.39213243 0.736565 ]
[0.05152962 0.7529597 0.656046 ]
[0.02647221 0.59484214 0.80340654]
[0.6910677 0.58067477 0.4303977 ]]
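The operation divides each slice by max(its L_p norm, eps). A row-wise plain-Python sketch for the common dim=1 case (illustrative, not Jittor's implementation):

```python
import math

def normalize_rows(rows, p=2, eps=1e-12):
    # L_p-normalize each row: x / max(||x||_p, eps).
    out = []
    for row in rows:
        norm = max(sum(abs(v) ** p for v in row) ** (1.0 / p), eps)
        out.append([v / norm for v in row])
    return out

print(normalize_rows([[3.0, 4.0]]))  # [[0.6, 0.8]]
```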
jittor.misc.python_pass_warper(mod_func, args, kw)
jittor.misc.rad2deg(x)
jittor.misc.randperm(n, dtype='int64')
jittor.misc.repeat(x, *shape)
Repeats this var along the specified dimensions.
Args:
x (var): jittor var.
shape (tuple): int or tuple. The number of times to repeat this var along each dimension.
Example:

x = jt.array([1, 2, 3])
x.repeat(4, 2)
[[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3]]

x.repeat(4, 2, 1).size()
[4, 2, 3,]
jittor.misc.repeat_interleave(x, repeats, dim=None)
jittor.misc.save_image(x, filepath, nrow: int = 8, padding: int = 2, normalize: bool = False, range=None, scale_each=False, pad_value=0, format=None)
jittor.misc.searchsorted(sorted, values, right=False)
Finds the indices from the innermost dimension of sorted for each value in values.
Example:
sorted = jt.array([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]])
values = jt.array([[3, 6, 9], [3, 6, 9]])
ret = jt.searchsorted(sorted, values)
assert (ret == [[1, 3, 4], [1, 2, 4]]).all(), ret

ret = jt.searchsorted(sorted, values, right=True)
assert (ret == [[2, 3, 5], [1, 3, 4]]).all(), ret

sorted_1d = jt.array([1, 3, 5, 7, 9])
ret = jt.searchsorted(sorted_1d, values)
assert (ret == [[1, 3, 4], [1, 3, 4]]).all(), ret
jittor.misc.split(d, split_size, dim)
Splits the tensor into chunks. Each chunk is a view of the original tensor.
If split_size is an integer, the tensor will be split into equally sized chunks (if possible); the last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.
If split_size is a list, the tensor will be split into len(split_size) chunks whose sizes along dim are given by split_size.
Args:
d (Tensor) – tensor to split.
split_size (int) or (list(int)) – size of a single chunk or list of sizes for each chunk
dim (int) – dimension along which to split the tensor.
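The chunk sizes implied by the two forms of split_size can be sketched with a small helper (illustrative only, not part of the Jittor API):

```python
def split_sizes(total, split_size):
    # Integer form: equal chunks, last one smaller if needed.
    if isinstance(split_size, int):
        sizes = [split_size] * (total // split_size)
        if total % split_size:
            sizes.append(total % split_size)
        return sizes
    # List form: chunk sizes taken as given; must cover the dimension.
    assert sum(split_size) == total
    return list(split_size)

print(split_sizes(10, 3))       # [3, 3, 3, 1]
print(split_sizes(10, [2, 8]))  # [2, 8]
```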
jittor.misc.stack(x, dim=0)
Concatenates sequence of vars along a new dimension.
All vars need to be of the same size.
Args:
x (sequence of vars) – sequence of vars to concatenate.
dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated vars (inclusive).
Example:

a1 = jt.array([[1,2,3]])
a2 = jt.array([[4,5,6]])
jt.stack([a1, a2], 0)
[[[1 2 3]]
 [[4 5 6]]]
jittor.misc.t(x)
jittor.misc.tolist(x)
jittor.misc.topk(input, k, dim=None, largest=True, sorted=True)
jittor.misc.triu_(x, diagonal=0)
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result are set to 0.
The upper triangular part of the matrix is defined as the elements on and above the diagonal.
Args:
x – the input tensor.
diagonal – the diagonal to consider. Default: 0.
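The masking rule, keeping elements whose column index minus row index is at least diagonal, can be sketched in plain Python for the 2-D case (illustrative, not Jittor's implementation):

```python
def triu_sketch(m, diagonal=0):
    # Zero out elements below the given diagonal of a 2-D list of lists.
    return [[v if j - i >= diagonal else 0 for j, v in enumerate(row)]
            for i, row in enumerate(m)]

print(triu_sketch([[1, 2], [3, 4]]))              # [[1, 2], [0, 4]]
print(triu_sketch([[1, 2], [3, 4]], diagonal=1))  # [[0, 2], [0, 0]]
```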
jittor.misc.unbind(x, dim=0)
Removes a var dimension.
Returns a tuple of all slices along a given dimension, already without it.
Args:
input (var) – the var to unbind
dim (int) – dimension to remove
Example:
a = jt.random((3,3))
b = jt.unbind(a, 0)
jittor.misc.unique(x)
Returns the unique elements of the input tensor.
Args:
x – the input tensor.
jittor.misc.view_as(x, y)