Contents
- 1. Storing datasets in TFRecord format
- 2. High-performance graph execution with tf.function
- 3. tf.TensorArray: dynamic arrays for graph mode
- 4. Allocating GPUs with tf.config

Study notes based on: 簡單粗暴 TensorFlow 2 (A Concise Handbook of TensorFlow 2)
1. Storing datasets in TFRecord format
import random
import os
import tensorflow as tf

train_data_dir = "./dogs-vs-cats/train/"
test_data_dir = "./dogs-vs-cats/test/"

# Build file paths and labels (cat -> 0, dog -> 1) from the training directory
file_dir = [train_data_dir + filename for filename in os.listdir(train_data_dir)]
labels = [0 if filename[0] == 'c' else 1 for filename in os.listdir(train_data_dir)]

# Shuffle files and labels together
f_l = list(zip(file_dir, labels))
random.shuffle(f_l)
file_dir, labels = zip(*f_l)

# Hold out 10% of the data for validation
valid_ratio = 0.1
idx = int((1 - valid_ratio) * len(file_dir))
train_files, valid_files = file_dir[:idx], file_dir[idx:]
train_labels, valid_labels = labels[:idx], labels[idx:]

train_tfrecord_file = "./dogs-vs-cats/train.tfrecords"
valid_tfrecord_file = "./dogs-vs-cats/valid.tfrecords"

# Serialize each (image, label) pair into a tf.train.Example and write it to the TFRecord file
with tf.io.TFRecordWriter(path=train_tfrecord_file) as writer:
    for filename, label in zip(train_files, train_labels):
        img = open(filename, 'rb').read()  # raw JPEG bytes
        feature = {
            'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img])),
            'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label]))
        }
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())

# Read the TFRecord file back as a dataset of serialized Examples
raw_train_dataset = tf.data.TFRecordDataset(train_tfrecord_file)

# Feature description used to parse each serialized Example
feature_description = {
    'image': tf.io.FixedLenFeature(shape=[], dtype=tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
}

def _parse_example(example_string):
    feature_dict = tf.io.parse_single_example(example_string, feature_description)
    feature_dict['image'] = tf.io.decode_jpeg(feature_dict['image'])  # decode JPEG bytes into a uint8 image tensor
    return feature_dict['image'], feature_dict['label']

train_dataset = raw_train_dataset.map(_parse_example)

# Quick visual check of the decoded images
import matplotlib.pyplot as plt
for img, label in train_dataset:
    plt.title('cat' if label == 0 else 'dog')
    plt.imshow(img.numpy())
    plt.show()
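The validation split (valid_files, valid_labels, valid_tfrecord_file) is prepared above but never serialized; a minimal sketch that writes and reads it back the same way, assuming the code above has already run:

with tf.io.TFRecordWriter(path=valid_tfrecord_file) as writer:
    for filename, label in zip(valid_files, valid_labels):
        img = open(filename, 'rb').read()
        feature = {
            'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img])),
            'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label]))
        }
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())

valid_dataset = tf.data.TFRecordDataset(valid_tfrecord_file).map(_parse_example)

For training, the parsed dataset is typically resized to a fixed shape and then shuffled, batched, and prefetched with the usual tf.data methods (shuffle, batch, prefetch).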
2. High-performance graph execution with tf.function
- TF 2.0 defaults to eager execution (Eager Execution), which is flexible and easy to debug.
- When you need high performance or want to deploy a model, use graph execution (Graph Execution).
- In TF 2.0, the tf.function module together with the AutoGraph mechanism lets you run a model in graph execution mode just by adding the @tf.function decorator.
Note: inside a function decorated with @tf.function, try to call only TensorFlow's built-in ops, and use only tensors or NumPy arrays as variables.
- A decorated function F(X, y) can call get_concrete_function to obtain its computation graph (a runnable sketch follows the snippet below):
graph = F.get_concrete_function(X, y)
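A minimal runnable sketch of this pattern; the function F and the inputs X, y are illustrative stand-ins, not from the original post:

import tensorflow as tf

@tf.function  # the first call traces the Python body into a static computation graph
def F(X, y):
    return tf.reduce_mean(tf.square(X - y))

X = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.constant([[0.0, 1.0], [2.0, 3.0]])
print(F(X, y))                          # runs in graph execution mode
graph = F.get_concrete_function(X, y)   # the ConcreteFunction (graph) traced for this input signature
print(graph)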
3. tf.TensorArray: dynamic arrays for graph mode
- tf.TensorArray provides a dynamic array that also works in graph execution mode (a @tf.function sketch follows the snippet below).
# dynamic_size=True lets the array grow beyond its initial size of 1
arr = tf.TensorArray(dtype=tf.int64, size=1, dynamic_size=True)
# write() returns the updated TensorArray, so the result must be reassigned
arr = arr.write(index=1, value=512)
for i in range(arr.size()):  # size() is now 2, because writing index 1 grew the array
    print(arr.read(i))
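Because write() returns a new TensorArray that must be reassigned, the same pattern carries over to @tf.function, where tf.TensorArray is most useful; a minimal graph-mode sketch (the function name and values are illustrative):

import tensorflow as tf

@tf.function
def array_write_and_read():
    ta = tf.TensorArray(dtype=tf.float32, size=3)
    ta = ta.write(0, 0.0)  # keep the returned TensorArray, or the write is lost
    ta = ta.write(1, 1.0)
    ta = ta.write(2, 2.0)
    return ta.read(0), ta.read(1), ta.read(2)

print(array_write_and_read())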
4. Allocating GPUs with tf.config
- List the available devices with list_physical_devices:

print('---device----')
gpus = tf.config.list_physical_devices(device_type='GPU')
cpus = tf.config.list_physical_devices(device_type='CPU')
print(gpus, "\n", cpus)

Output:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
- Choose which devices are visible to the program with set_visible_devices (here, only the first two GPUs):

tf.config.set_visible_devices(devices=gpus[0:2], device_type='GPU')
Alternatively:
- in the terminal, run export CUDA_VISIBLE_DEVICES=2,3
- or add to the code:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = "2,3"

Either way, the program is restricted to running only on GPUs 2 and 3.
- Request GPU memory on demand (memory growth) instead of pre-allocating all of it:

gpus = tf.config.list_physical_devices(device_type='GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(device=gpu, enable=True)
- Cap how much memory the process may use on a GPU with set_logical_device_configuration (memory_limit is in MB, so this is 1 GB on GPU 0):

gpus = tf.config.list_physical_devices(device_type='GPU')
tf.config.set_logical_device_configuration(
    gpus[0],
    [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
On a single-GPU machine, multi-GPU code can be simulated by splitting the physical GPU into several logical GPUs (a tf.distribute sketch follows the output below):
gpus = tf.config.list_physical_devices('GPU')
tf.config.set_logical_device_configuration(
    gpus[0],
    [tf.config.LogicalDeviceConfiguration(memory_limit=2048),
     tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
gpus = tf.config.list_logical_devices(device_type='GPU')
print(gpus)
Output: two virtual (logical) GPUs
[LogicalDevice(name='/device:GPU:0', device_type='GPU'), LogicalDevice(name='/device:GPU:1', device_type='GPU')]
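With the two logical GPUs in place, multi-GPU code can be exercised on one physical card, for example with tf.distribute.MirroredStrategy; a minimal sketch with an illustrative toy model (not from the original post):

import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # by default uses all visible (logical) GPUs
print('Number of replicas:', strategy.num_replicas_in_sync)

with strategy.scope():  # variables created here are mirrored across the replicas
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='sgd', loss='mse')

X = np.random.random((64, 4)).astype('float32')
y = np.random.random((64, 1)).astype('float32')
model.fit(X, y, batch_size=16, epochs=1)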