MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library
When doing distributed training with PyTorch, you may run into this error:

Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library. Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.

Solution 1: add to your environment variables:

export MKL_SERVICE_FORCE_INTEL=1

Solution 2: alternatively, add to your environment variables:

export MKL_THREADING_LAYER=GNU

Analysis:
Grepping conda manifests, libgomp is pulled in by libgcc-ng, which is in turn pulled in by pretty much everything. So the culprit is more likely to be whoever is setting MKL_THREADING_LAYER=INTEL. As far as that goes, it's weird.
```python
import os

def print_layer(prefix):
    print(f'{prefix}: {os.environ.get("MKL_THREADING_LAYER")}')

if __name__ == '__main__':
    print_layer('Pre-import')
    from torch import multiprocessing as mp
    import numpy as np
    print_layer('Post-import')
    mp.set_start_method('spawn')
    p = mp.Process(target=print_layer, args=('Child',))
    p.start()
    p.join()
```

See, if torch is imported before numpy then the child process here gets a GNU threading layer (even though the parent doesn't have the variable defined):
```
Pre-import: None
Post-import: None
Child: GNU
```

But if the imports are swapped so numpy is imported before torch, the child process gets an INTEL threading layer:

```
Pre-import: None
Post-import: None
Child: INTEL
```

So I suspect numpy - or one of its imports - is messing with the env parameter of Popen, but after half an hour of searching I can't figure out how.
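The child's layer in the traces above comes from the environment the spawned worker inherits: whatever value lands in the parent's environment before the spawn is what the child reports. A stdlib-only sketch of that propagation, with no numpy or torch involved (the injected value is chosen purely for illustration):

```python
import multiprocessing as mp
import os

def report(prefix):
    # Same probe as in the snippet above.
    print(f'{prefix}: {os.environ.get("MKL_THREADING_LAYER")}')

if __name__ == '__main__':
    # Pretend some import (MKL in the real case) injected the variable
    # into the parent's environment.
    os.environ['MKL_THREADING_LAYER'] = 'GNU'
    mp.set_start_method('spawn')
    # The spawned child starts a fresh interpreter but inherits the
    # parent's environment, so it reports the injected value.
    p = mp.Process(target=report, args=('Child',))
    p.start()
    p.join()
```

This is why import order in the parent matters even though the parent itself never reads the variable.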
Ref: https://github.com/pytorch/pytorch/issues/37377
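If you would rather not rely on the shell, either fix can also be applied from inside Python, as long as it runs before numpy or torch (and hence MKL) are imported, since the threading layer is chosen when the library first loads. A minimal sketch (pick whichever of the two variables matches your situation):

```python
import os

# Must run before numpy/torch are imported: mkl-service reads these
# variables when MKL is first loaded, not afterwards.
os.environ['MKL_THREADING_LAYER'] = 'GNU'      # solution 2
# os.environ['MKL_SERVICE_FORCE_INTEL'] = '1'  # or solution 1

# Only now import the libraries that pull in MKL:
# import numpy as np
# import torch
print(os.environ['MKL_THREADING_LAYER'])  # -> GNU
```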