Distributed Machine Learning with Python: Accelerating Model Training and Serving with Distributed Systems

Contents (excerpt), Chapter 2: Parameter Server and All-Reduce
- Technical requirements
- Parameter server architecture
- Communication bottleneck in the parameter server architecture
- Sharding the model among parameter servers
- Implementing the parameter server
- Defining model layers
- Defining the parameter...
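The chapter outline above centers on the parameter server pattern and on sharding the model across multiple servers to ease the communication bottleneck. As a rough, self-contained sketch of that idea only (not code from the book; the class and method names ParameterServerShard, push_gradients, and pull_weights are invented here for illustration), a toy single-process version with two shards might look like this:

import numpy as np

class ParameterServerShard:
    """Holds one shard of the model parameters and applies SGD updates."""
    def __init__(self, weights, lr=0.01):
        self.weights = {k: v.copy() for k, v in weights.items()}
        self.lr = lr

    def push_gradients(self, grads):
        # Workers push gradients; the shard updates its parameters in place.
        for name, grad in grads.items():
            self.weights[name] -= self.lr * grad

    def pull_weights(self):
        # Workers pull the latest parameters before the next iteration.
        return {k: v.copy() for k, v in self.weights.items()}

# Shard the model across two servers so parameter traffic is split between them.
model = {"layer1": np.zeros((4, 4)), "layer2": np.zeros((4, 2))}
shards = [
    ParameterServerShard({"layer1": model["layer1"]}),
    ParameterServerShard({"layer2": model["layer2"]}),
]

# One simulated training step: each worker pushes gradients to the shard
# that owns the corresponding layer, then pulls the updated weights.
fake_grads = {"layer1": np.ones((4, 4)), "layer2": np.ones((4, 2))}
shards[0].push_gradients({"layer1": fake_grads["layer1"]})
shards[1].push_gradients({"layer2": fake_grads["layer2"]})
print(shards[0].pull_weights()["layer1"][0, 0])  # -0.01 after one SGD step

In an actual deployment each shard would run in its own process and workers would communicate over RPC; sharding matters because it spreads push/pull traffic across servers instead of funneling it through a single one.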

Bibliographic Details
Main Author: Wang, Guanhua
Format: Electronic Book
Language: English
Published: Birmingham: Packt Publishing Limited, 2022
Subjects: Internet

Holdings details from Stanford University
Call Number: INTERNET RESOURCE