Mohammad Mohammadi Amiri

 

Assistant Professor
Department of Computer Science
Rensselaer Polytechnic Institute (RPI)


Contact

MRC 331A
Troy, NY 12180, USA
[Email]
[Google Scholar] | [LinkedIn] | [ResearchGate] | [X (Twitter)]


Dr. Amiri was appointed an Assistant Professor in the Department of Computer Science at RPI in Fall 2023. His research revolves around the theme of collective intelligence. In today's rapidly evolving technological landscape, harnessing the power of data at the edge of the network has emerged as a pivotal catalyst for unlocking the true potential of collective intelligence. Dr. Amiri's research focuses on using this decentralized data to enhance system intelligence for the benefit of everyone while protecting sensitive information.

Dr. Amiri received the Ph.D. degree in Electrical and Electronic Engineering from Imperial College London in 2019. He received the M.Sc. degree in Electrical and Computer Engineering from the University of Tehran in 2014, and the B.Sc. degree in Electrical Engineering from the Iran University of Science and Technology in 2011, both with the highest rank. He received the Best Ph.D. Thesis award from both the Department of Electrical and Electronic Engineering at Imperial College London (academic year 2018-2019) and the IEEE Information Theory Chapter of UK and Ireland (2019). His paper "Federated learning over wireless fading channels" received the IEEE Communications Society Young Author Best Paper Award in 2022.

For motivated Ph.D. students with strong mathematical and analytical skills: please send your CV to the above email address if you are interested in machine learning and comfortable with programming.

Research

Dr. Amiri's research revolves around the theme of collective intelligence. As the world becomes increasingly interconnected, the wealth of information generated at the periphery of our networks holds the key to a new era of innovation and insight. Tapping into this decentralized data not only empowers individuals and devices with real-time decision-making capabilities but also fosters a collaborative environment in which a multitude of perspectives converge into a unified intelligence. This collective intelligence transcends traditional boundaries, enabling complex challenges to be tackled with unprecedented efficiency and creativity. Harnessing data at the edge is thus not just a technological advance but a strategic imperative for a connected world built on decentralized insights. Dr. Amiri's research focuses on using this decentralized data to enhance system intelligence for the benefit of everyone while protecting sensitive information. His research interests include machine learning, data science, information theory, privacy, and optimization.

This is a list of Dr. Amiri's recent research activities:

  • Large language models (LLMs)
  • In the era of artificial intelligence (AI) and natural language processing, LLMs have emerged as transformative tools, revolutionizing the way we interact with and extract insights from vast troves of textual data. Characterized by their enormous size and remarkable capabilities, LLMs have found applications across a spectrum of domains, from chatbots and virtual assistants to content generation, translation, information retrieval, and image recognition. Many of these applications rely on adapting one large-scale, pre-trained LLM to multiple downstream tasks. Such adaptation is usually done via fine-tuning, which updates the parameters of the pre-trained model. Fine-tuning such massive models is highly resource-intensive, requiring a considerably large amount of on-device memory and compute, and can be significantly expensive in terms of time, energy, computing resources, and carbon footprint. Thus, the effective deployment of LLMs remains contingent on an imperative challenge: the resource-efficient fine-tuning of these colossal linguistic powerhouses.

  • Data valuation/data markets
  • Data is the main fuel of the modern world, enabling AI and driving technological growth. The demand for data has grown substantially, and data products have become valuable assets to purchase and sell, since acquiring high-quality data to discover knowledge is extremely valuable across sectors. As data is a valuable resource, it is important to establish a principled method to quantify its worth and its value to data seekers. This is addressed via data valuation, an essential component for realizing a fair data trading platform for owners and seekers.

  • Privacy-preserving machine learning/federated learning
  • Today, a tremendous amount of data is generated across different network layers that can be utilized to improve the intelligence of many applications. Privacy-preserving machine learning techniques, such as federated learning, have emerged to exploit this decentralized data while the data stays local to the owner's device. Specifically, federated learning aims to fit a global model to the decentralized data available locally at the edge devices. The devices receive the global model from the server, update it using their local data, and share the local updates with the server. The server aggregates the local updates, a common aggregation rule being averaging, and updates the global model. Despite its broad applicability, this algorithm poses various challenges, including privacy concerns, communication overhead, non-IID data distributions, and model heterogeneity.

  • Deep learning
  • The theory of deep learning involves a comprehensive study of the fundamental principles and mathematical foundations that underlie neural networks. This includes understanding the intricacies of network architectures, optimization algorithms, regularization techniques, generalization properties, and how these components contribute to the learning process within deep learning models. The goal is to elucidate the theoretical aspects to improve model interpretability, robustness, and overall performance, advancing the field and its applications.
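The resource-efficient fine-tuning challenge above can be made concrete with a small sketch in the spirit of low-rank adaptation (LoRA), one widely used parameter-efficient approach; the linear layer, sizes, rank, and initialization here are illustrative assumptions, not the specific methods studied in this research.

```python
import numpy as np

# Minimal sketch of parameter-efficient fine-tuning in the spirit of
# low-rank adaptation (LoRA). All sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

W = rng.normal(size=(d_out, d_in))             # frozen pre-trained weight
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable low-rank factor
B = np.zeros((d_out, rank))                    # trainable, zero-initialized

def adapted_forward(x):
    # Adapted layer: frozen base plus a low-rank correction B @ A.
    return (W + B @ A) @ x

# Only the small adapter factors are updated per downstream task, so the
# per-task training/storage cost drops from d_out*d_in to rank*(d_out + d_in).
full_params = W.size              # 64 * 64 = 4096
adapter_params = A.size + B.size  # 4 * (64 + 64) = 512
```

Because B starts at zero, the adapted layer initially reproduces the pre-trained model exactly, and each task's adaptation is stored in a fraction of the full parameter count.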
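One principled way to quantify a data point's worth, as discussed above, is its Shapley value: its average marginal contribution to a utility over all orderings of the dataset. The sketch below is a toy illustration; the utility function and per-point qualities are hypothetical stand-ins for training and evaluating a real model on data subsets.

```python
import itertools
import math

# Hypothetical per-point qualities, standing in for real training data.
values = [4.0, 1.0, 0.0]
n = len(values)

def utility(subset):
    # Toy stand-in for "performance of a model trained on this subset",
    # with diminishing returns; a real system would train and evaluate.
    return math.sqrt(sum(values[i] for i in subset))

def shapley(i):
    # Exact Shapley value of point i: its marginal contribution to the
    # points preceding it, averaged over all n! orderings of the dataset.
    # (Exact enumeration is tractable only for tiny n; practical data
    # valuation relies on Monte Carlo or other approximations.)
    total = 0.0
    for order in itertools.permutations(range(n)):
        preceding = set(order[:order.index(i)])
        total += utility(preceding | {i}) - utility(preceding)
    return total / math.factorial(n)
```

By construction, the values sum to utility of the full set minus utility of the empty set (efficiency), and a point that never improves the utility is valued at zero, which is what makes the notion attractive for fair data pricing.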
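The federated averaging loop described above (broadcast the global model, update locally, average the updates) can be sketched for a toy linear least-squares model; the model, learning rate, round count, and synthetic non-IID client data are illustrative assumptions.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, local_steps=1):
    # On-device step: start from the broadcast global model and take a few
    # gradient steps on the client's local least-squares objective.
    w = global_w.copy()
    for _ in range(local_steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

def fedavg_round(global_w, clients):
    # Server side: collect local updates and aggregate by plain averaging.
    return np.mean([local_update(global_w, X, y) for X, y in clients], axis=0)

# Toy setup: two clients with differently distributed (non-IID) inputs but a
# shared underlying model, so the averaged model should recover w_true
# without the server ever seeing raw client data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 3.0):  # per-client input distribution shift
    X = rng.normal(loc=shift, scale=1.0, size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
```

Only model updates cross the network, never raw data; the challenges listed above (privacy leakage from updates, communication cost, non-IID drift) all arise from this same exchange pattern.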

Publications

Awards

  • IEEE Communications Society Young Author Best Paper Award (2022), paper "Federated learning over wireless fading channels", IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3546-3557, May 2020.
  • Best PhD Thesis (2019), IEEE Information Theory Chapter of UK and Ireland.
  • Eryl Cadwallader Davies Prize - Outstanding PhD Thesis (2019), EEE Department, Imperial College London.
  • EEE Departmental Scholarship (2015 - 2019), Imperial College London.
  • Excellent Student (2014), Ranked 1st among all M.Sc. students in ECE Department, University of Tehran.
  • Excellent Student (2011), Ranked 1st among all B.Sc. students in EE Department, Iran University of Science and Technology.

Invited Talks

  • Collective intelligence, Bell Labs, May 2023.
  • Decentralized data valuation, Decentralized Society + Web3 Research Panel, Media Lab Fall Meeting, Massachusetts Institute of Technology, Oct. 2022.
  • Federated edge machine learning, Keynote speaker at Futurewei University Days Workshop, Aug. 2021.
  • Federated edge learning: advances and challenges, Keynote speaker at IEEE International Conference on Communications, Networks and Satellite (COMNETSAT), Dec. 2020.
  • Federated edge learning: advances and challenges, King's College London, Nov. 2020.
  • Federated edge learning: advances and challenges, University of Maryland, Oct. 2020.
  • Federated edge learning: advances and challenges, University of Arizona, Oct. 2020.
  • Federated learning: advances and challenges, Virginia Polytechnic Institute and State University (Virginia Tech), Oct. 2020.
  • Fundamental limits of coded caching, Ohio State University, Nov. 2016.
  • Fundamental limits of coded caching, Stanford University, Nov. 2016.

Academic Services

  • Technical program committee
    • IEEE International Conference on Distributed Computing Systems, 2024.
    • IEEE Global Communications Conference (Globecom), Wireless Communications for Distributed Intelligence (WCDI), 2023.
    • IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2022.
    • IEEE International Conference on Microwaves, Communications, Antennas and Electronic Systems, 2021.
    • IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2021.
    • IEEE International Conference on Communications (ICC) Workshop - Edge Learning for 5G Mobile Networks and Beyond (EdgeLearn5G), 2021.
    • IEEE International Conference on Communications (ICC): SAC Machine Learning for Communications Track, 2021.
    • IEEE International Conference on Communications, Networks and Satellite (COMNETSAT), 2020.
    • IEEE Global Communications Conference (Globecom), Cognitive Radio and AI-Enabled Networks (CRAEN), 2020.
    • Conference on Information Sciences and Systems (CISS), 2020.
    • IEEE International Conference on Communications (ICC) Workshop - Edge Machine Learning for 5G Mobile Networks and Beyond (EML5G), 2020.

Openings

I am looking for motivated Ph.D. students with strong mathematical and analytical skills. If you are interested in machine learning and comfortable with programming, please send me your CV.