Comparative analysis of model-free and model-based HVAC control for residential demand response

by Kuldeep R. Kurte, Kadir Amasyali, Jeffrey D. Munk, Helia Zandi
Publication Type
Conference Paper
Book Title
BuildSys '21: Proceedings of the 8th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
Publication Date
Page Numbers
309–313
Conference Name
Second SIGEnergy Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (RLEM)
Conference Location
Coimbra, Portugal
Conference Sponsor
ACM, SIGEnergy
Conference Date

In this paper, we present a comparative analysis of model-free reinforcement learning (RL) and model predictive control (MPC) approaches for intelligent control of heating, ventilation, and air-conditioning (HVAC) systems. A deep Q-network (DQN) is used as the candidate model-free RL algorithm. The two control strategies were developed for a residential demand-response (DR) HVAC application. We considered MPC as our gold standard against which to compare DQN's performance. The question we sought to answer through this work was: what percentage of MPC's performance can a model-free RL approach achieve for intelligent HVAC control? Based on our test results, RL achieved on average ≈62% of MPC's daily cost savings. Considering the pure-optimization, model-based nature of MPC, RL showed very promising performance. We believe that the interpretations derived from this comparative analysis provide useful insights for choosing among DR approaches and for further enhancing the performance of RL-based methods for building energy management.
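The abstract does not include implementation details; as a rough illustration only, the following is a minimal sketch of a DQN-style controller of the kind described, written in Python with PyTorch. The state features (indoor temperature, outdoor temperature, electricity price, hour of day), the three setpoint actions, and the cost-plus-comfort reward framing are hypothetical assumptions made for this sketch and are not taken from the paper.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4   # assumed: [indoor temp, outdoor temp, electricity price, hour of day]
N_ACTIONS = 3   # assumed: lower / hold / raise the cooling setpoint
GAMMA = 0.99

class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)

# Replay buffer; transitions (state, action, reward, next_state, done) would be
# appended here by an environment-interaction loop that is omitted in this sketch.
# The reward would combine electricity cost (under the DR price signal) with a
# comfort penalty -- an assumed formulation, not the paper's.
replay = deque(maxlen=10_000)

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the Q-network."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state).float()).argmax())

def train_step(batch_size=64):
    """One DQN update: minimize the TD error against a periodically synced target network."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    s, s2, r = s.float(), s2.float(), r.float()
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_target = r + GAMMA * target_net(s2).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()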