Python TensorFlow Basics

Python

IPython


At the command line, run: $ ipython notebook

Or use: $ jupyter notebook

Python Math library

https://docs.python.org/3.6/library/math.html
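
As a quick, illustrative sketch (not part of the linked docs), a few calls to the standard math module:

import math

print(math.exp(1))       # e, about 2.71828
print(math.log(math.e))  # 1.0
print(math.sqrt(2))      # about 1.41421
print(math.pi)           # about 3.14159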

Numpy

python-numpy tutorial: http://cs231n.github.io/python-numpy-tutorial/
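
A minimal NumPy sketch (illustrative only) of the concepts that TensorFlow reuses later on this page: arrays, shape, reshape, sum along an axis, and argmax:

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
print(a.shape)            # (2, 3)
print(a.reshape(3, 2))    # [[1. 2.] [3. 4.] [5. 6.]]
print(np.sum(a, axis=0))  # [5. 7. 9.]
print(np.argmax(a, 1))    # [2 2]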

matplotlib

Displaying a waveform (audio plot) with matplotlib/pyplot

import matplotlib.pyplot as plt
from wavReader import readWav   # local helper that returns (sample rate, samples)

rate, data = readWav('c:\\Users\\icenter\\Documents\\a.wav')
plt.plot(data)                  # plot the raw samples
plt.show()
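
wavReader is a local helper. If it is not available, a rough equivalent can be built from the standard wave module (a sketch assuming a 16-bit PCM, single-channel WAV file; the path is only an example):

import wave
import numpy as np
import matplotlib.pyplot as plt

with wave.open('a.wav', 'rb') as wf:   # example path
    rate = wf.getframerate()
    data = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

plt.plot(data)
plt.show()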

Activation Functions

ReLU

The ReLU function, relu(x) = max(0, x), is a "hard" (non-smooth) version of the Softplus function.

import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
relu = lambda x: np.maximum(x, 0)
plt.plot(x, relu(x), color='blue', lw=2)
plt.show()

Softplus

The Softplus function, softplus(x) = log(1 + e^x), is a smooth (softened) version of the ReLU function.

import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
softplus = lambda x: np.log(1.0 + np.exp(x))
plt.plot(x, softplus(x), color='blue', lw=2)
plt.show()

Sigmoid

sigmoid(x) = 1 / (1 + e^(-x))

(http://matplotlib.org/)

import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
sigmoid = lambda x: 1 / (1 + np.exp(-x))
plt.plot(x,sigmoid(x), color='red', lw=2)
plt.show()

Softmax

The softmax function normalizes a vector of scores into values that sum to 1 (a probability distribution), as the example below shows.

import math
import matplotlib.pyplot as plt
w=[1,2,3,4,5,6,7,8,9]
w_exp=[math.exp(i) for i in w]
print(w_exp)
sum_w_exp = sum(w_exp)
softmax = [round(i / sum_w_exp, 3) for i in w_exp]
print(softmax)
print(sum(softmax))
plt.plot(softmax)
plt.show()
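
A practical note (not from the original page): for larger scores, exp() can overflow; subtracting the maximum first gives the same softmax values, since softmax is invariant to shifting all inputs by a constant:

import math

w = [1, 2, 3, 4, 5, 6, 7, 8, 9]
m = max(w)
w_exp = [math.exp(i - m) for i in w]   # shift by the max to avoid overflow
sum_w_exp = sum(w_exp)
softmax = [round(i / sum_w_exp, 3) for i in w_exp]
print(softmax)                         # same values as above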

TensorFlow

TensorFlow: an open-source machine intelligence library

TensorFlow borrows many concepts and functions from Python's NumPy library, such as arrays, shapes, and functions like reduce_sum(), reshape(), and argmax().

A Tensor in TensorFlow can be thought of as the counterpart of a NumPy array, although TensorFlow and NumPy have different goals.
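
To make the correspondence concrete, a small side-by-side sketch (illustrative only; uses the TensorFlow 1.x API that the rest of this page uses):

import numpy as np
import tensorflow as tf

n = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
t = tf.constant(n)          # a Tensor holding the same data

sess = tf.Session()
print(np.sum(n, axis=0), sess.run(tf.reduce_sum(t, 0)))  # [5. 7. 9.] in both
print(np.argmax(n, 1), sess.run(tf.argmax(t, 1)))        # [2 2] in both
print(n.reshape(-1), sess.run(tf.reshape(t, [-1])))      # flattened in both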

TensorFlow variables and operations

import tensorflow as tf
import numpy

A = tf.Variable([[1, 2], [3, 4]], dtype=tf.float32)
A.get_shape()

TensorShape([Dimension(2), Dimension(2)])

B = tf.Variable([[5, 6], [7, 8]], dtype=tf.float32)
B.get_shape()

TensorShape([Dimension(2), Dimension(2)])

C = tf.matmul(A, B)
tf.global_variables()

[<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32_ref>, <tf.Variable 'Variable_1:0' shape=(2, 2) dtype=float32_ref>]

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

print(sess.run(C))

[[ 19. 22.] [ 43. 50.]]

print(sess.run(tf.reduce_sum(C, 0)))

[ 62. 72.]

print(sess.run(tf.reduce_sum(C, 1)))

[ 41. 93.]

Activation functions in TensorFlow

Sigmoid / Softmax / Softplus functions

(https://www.tensorflow.org/api_guides/python/nn)

R = tf.Variable([1., .2, 3], dtype=tf.float32)
T = tf.Variable([True, True, False, False], dtype=tf.bool)
U = tf.Variable([True, True, True, False], dtype=tf.bool)
init = tf.global_variables_initializer()
sess.run(init)  # equivalently: sess.run(tf.global_variables_initializer())
print(sess.run(tf.nn.sigmoid(R, name="last")))
print(sess.run(tf.nn.softmax(R, dim=-1, name="last")))
print(sess.run(tf.nn.softplus(R, name="last")))
print(sess.run(tf.nn.relu(R, name="XXX")))
print(sess.run(tf.cast(T, tf.int32)))
print(sess.run(tf.cast(T, tf.float32)))
print(sess.run(tf.reduce_mean(tf.cast(T, tf.float32))))
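
The cast-and-reduce_mean pattern above is commonly used to turn boolean comparison results into an accuracy value; a short illustrative sketch reusing the T and U variables defined above:

# Treating T as per-example "correct?" flags, the mean of the casted values is the accuracy.
print(sess.run(tf.reduce_mean(tf.cast(T, tf.float32))))      # 0.5 (2 of 4 are True)

# Comparing two boolean tensors element-wise, then averaging:
match = tf.equal(T, U)                                       # [True True False True]
print(sess.run(tf.reduce_mean(tf.cast(match, tf.float32))))  # 0.75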

TensorFlow's reshape function

tf.reshape()
Y = tf.Variable([[1, 2, 3], [4, 5, 6]], dtype=tf.float32)
sess.run(tf.global_variables_initializer())
print(sess.run(Y))

[[1 2 3] [4 5 6]]

print(sess.run(tf.reshape(Y,[6])))

[ 1. 2. 3. 4. 5. 6.]


print(sess.run(tf.reshape(Y,[3,2])))

[[ 1. 2.] [ 3. 4.] [ 5. 6.]]

print(sess.run(tf.reshape(Y,[-1])))

[ 1. 2. 3. 4. 5. 6.]

print(sess.run(tf.reshape(Y, [-1,1])))

array([[ 1.], [ 2.], [ 3.], [ 4.], [ 5.], [ 6.]], dtype=float32)

tf.argmax

tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis.

print(sess.run(tf.argmax(Y,1)))

[2 2]

print(sess.run(tf.argmax(Y,0)))

[1 1 1]

Linear algebra operations in TensorFlow

TensorFlow includes a purpose-built accelerator for linear algebra operations, XLA (Accelerated Linear Algebra). The basic operations covered here are listed below; a short sketch of them in TensorFlow follows the API links.

Representing matrices and vectors

Matrix addition and scalar multiplication

Matrix-vector multiplication

Matrix-matrix multiplication

Matrix inverse and transpose


(https://www.tensorflow.org/api_docs/python/tf/matrix_inverse)

(https://www.tensorflow.org/api_guides/python/math_ops)
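
A short illustrative sketch of these operations in the TensorFlow 1.x API (example values only):

import tensorflow as tf

M = tf.constant([[1., 2.], [3., 4.]])   # a 2x2 matrix
v = tf.constant([[1.], [2.]])           # a column vector

sess = tf.Session()
print(sess.run(M + M))                  # matrix addition
print(sess.run(3.0 * M))                # scalar multiplication
print(sess.run(tf.matmul(M, v)))        # matrix-vector product
print(sess.run(tf.matmul(M, M)))        # matrix-matrix product
print(sess.run(tf.matrix_inverse(M)))   # matrix inverse
print(sess.run(tf.transpose(M)))        # matrix transpose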


TensorFlow Saver

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Save the variables to disk.
save_path = saver.save(sess, "/tmp/model.ckpt")
print("Model saved in file: %s" % save_path)

# Restore the variables from disk.
saver.restore(sess, "/tmp/model.ckpt")
print("Model restored.")
# Do some work with the model.

# Current file path:
import os
os.path.realpath('.')
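
For context, a self-contained sketch of the full save/restore round trip (the variable, name, and checkpoint path are only examples; TensorFlow 1.x):

import tensorflow as tf

v = tf.Variable([[1., 2.], [3., 4.]], name="v")   # example variable
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")         # no initializer needed after restore
    print("Model restored:", sess.run(v))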