Using PyTorch



Commonly used PyTorch modules

https://pytorch.org/docs/stable/nn.html

LSTM

# Parameters:
input_size: dimension of the input x
hidden_size: dimension of the hidden state
num_layers: number of stacked LSTM layers
bias: whether to use bias terms
batch_first: if True, the input x has shape (bs, seq_len, input_size);
             if False, x has shape (seq_len, bs, input_size)
dropout: dropout probability applied between layers; defaults to 0
bidirectional: if True, the module is a biLSTM
proj_size: output dimension of the LSTM; defaults to hidden_size. If proj_size > 0 is given, the output dimension becomes proj_size
# Inputs:
x:
    (bs, seq_len, input_size) if batch_first == True
    (seq_len, bs, input_size) if batch_first == False
h_0:
    (D*num_layers, bs, hidden_size)  D = 2 for a biLSTM, D = 1 otherwise
    (D*num_layers, bs, proj_size)    if proj_size is specified
c_0:
    (D*num_layers, bs, hidden_size)  always hidden_size, even with proj_size
# Outputs:
o:
    (bs, seq_len, D*hidden_size) if batch_first == True
    (seq_len, bs, D*hidden_size) if batch_first == False
    the last dimension becomes D*proj_size if proj_size is specified
h_n:
    (D*num_layers, bs, hidden_size)  D = 2 for a biLSTM, D = 1 otherwise
    (D*num_layers, bs, proj_size)    if proj_size is specified
c_n:
    (D*num_layers, bs, hidden_size)
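The shapes above can be checked with a minimal runnable sketch (the sizes below are arbitrary examples, not from the original):

```python
import torch
import torch.nn as nn

bs, seq_len, input_size, hidden_size, num_layers = 4, 10, 32, 64, 2

# bidirectional=True, so D = 2
lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size,
               num_layers=num_layers, batch_first=True,
               bidirectional=True)

x = torch.randn(bs, seq_len, input_size)           # (bs, seq_len, input_size)
h0 = torch.zeros(2 * num_layers, bs, hidden_size)  # (D*num_layers, bs, hidden_size)
c0 = torch.zeros(2 * num_layers, bs, hidden_size)

o, (h_n, c_n) = lstm(x, (h0, c0))
print(o.shape)    # (bs, seq_len, D*hidden_size) -> torch.Size([4, 10, 128])
print(h_n.shape)  # (D*num_layers, bs, hidden_size) -> torch.Size([4, 4, 64])
```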

LSTM CELL

# Parameters:
input_size: dimension of the input x
hidden_size: dimension of the hidden state
bias: whether to use bias terms
# inputs:
x:
(bs, input_size)
h_0:
(bs, hidden_size)
c_0:
(bs, hidden_size)
# outputs:
h_1:
(bs, hidden_size)
c_1:
(bs, hidden_size)
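Unlike nn.LSTM, an nn.LSTMCell processes one time step at a time, so you drive the sequence loop yourself. A minimal sketch (sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

bs, input_size, hidden_size, seq_len = 3, 16, 32, 5

cell = nn.LSTMCell(input_size, hidden_size)

h = torch.zeros(bs, hidden_size)  # h_0
c = torch.zeros(bs, hidden_size)  # c_0
xs = torch.randn(seq_len, bs, input_size)

outputs = []
for t in range(seq_len):
    # each step maps (bs, input_size) -> (bs, hidden_size)
    h, c = cell(xs[t], (h, c))
    outputs.append(h)

out = torch.stack(outputs)  # (seq_len, bs, hidden_size)
print(out.shape)            # torch.Size([5, 3, 32])
```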

EMBEDDING

# Parameters:
num_embeddings: int, size of the embedding dictionary; e.g. to embed a 10000-word vocabulary, num_embeddings = 10000
embedding_dim: int, size of each embedding vector
# Input:
(*) input of any shape, where each value is the index of an element in the dictionary
# output:
(*, embedding_dim)
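A minimal sketch of the index-to-vector lookup (vocabulary and vector sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

# 10000-entry vocabulary, 128-dim vectors
emb = nn.Embedding(num_embeddings=10000, embedding_dim=128)

idx = torch.tensor([[1, 2, 4], [4, 3, 9]])  # indices of any shape (*)
vec = emb(idx)                              # output shape (*, embedding_dim)
print(vec.shape)                            # torch.Size([2, 3, 128])
```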

Author: 崔文耀
License: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit 崔文耀 when reposting.