Intel issued a press release on its official website a few days ago, announcing that it has joined the PyTorch Foundation as a Premier member.
Intel said it has been actively involved in PyTorch development for five years, helping to optimize PyTorch's CPU inference performance and thereby improving AI performance on Intel processors.

▲ Image source: Intel press release
In PyTorch 2.0, Intel optimized TorchInductor CPU FP32 inference, improved GNN inference and training performance in PyG, optimized int8 inference on the x86 CPU platform, and used the oneDNN Graph API to accelerate inference on CPUs.
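For context, TorchInductor is the default backend of torch.compile in PyTorch 2.0, and CPU FP32 inference through that path is what the optimizations above target. The following is a minimal sketch of that usage; the toy model and input shapes are illustrative and not taken from the press release.

```python
import torch
import torch.nn as nn

# A small stand-in model; any eager-mode nn.Module would do.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# torch.compile lowers the model through TorchInductor (the default backend
# in PyTorch 2.0); on CPU this is the FP32 inference code path.
compiled_model = torch.compile(model)

with torch.no_grad():
    x = torch.randn(32, 128)   # dummy FP32 input batch
    out = compiled_model(x)

print(out.shape)  # torch.Size([32, 10])
```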
Intel currently has four PyTorch maintainers, three active and one retired: Mingfei Ma (deep learning software engineer), Jiong Gong (lead engineer and compiler front-end maintainer), Xiaobing Zhang (deep learning software engineer), and Jianhui Li (Senior Principal Engineer, now an emeritus maintainer recognized by the PyTorch community for his past contributions and expertise in artificial intelligence). They maintain the CPU performance module and the compiler front end, and have merged hundreds of PRs into PyTorch upstream.