Large Language Models (LLMs) have become indispensable across numerous applications. However, these models differ significantly in their capabilities and in the computational and energy costs they incur. Given the varying demands of individual queries—such as differences in domain or complexity—relying on a single LLM for an application is often suboptimal: neither the largest and most energy-intensive model nor the one with the highest average test performance is always the ideal choice. As a result, selecting an LLM that balances accuracy and energy efficiency for a specific application remains a challenging task. To navigate this trade-off, we propose PEARL, a routing mechanism that leverages a proxy model to assess the complexity of each incoming query. Guided by this assessment and by predictions of performance and energy consumption, our mechanism enables decision-makers to configure an upper bound on the energy consumption of their LLM inference and to assign each query to the best-performing LLM without exceeding the energy limits they impose on their infrastructure. In addition to being configurable and dynamic, our approach significantly reduces energy consumption on benchmark datasets (by more than 18% in certain cases) while achieving higher accuracy.
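The core routing idea described above can be sketched as follows. This is a minimal illustrative example, not PEARL's actual implementation: the model names, accuracy figures, energy costs, and the complexity-to-energy scaling are all hypothetical stand-ins for what the paper's learned proxy model and predictors would provide.

```python
# Illustrative sketch of energy-bounded LLM routing (all models and numbers
# are hypothetical; PEARL's proxy model and predictors are learned).

# Candidate LLMs with assumed predicted accuracy and per-query energy (joules).
MODELS = [
    {"name": "small-llm",  "accuracy": 0.71, "energy_j": 40.0},
    {"name": "medium-llm", "accuracy": 0.82, "energy_j": 180.0},
    {"name": "large-llm",  "accuracy": 0.90, "energy_j": 900.0},
]

def route(query_complexity: float, energy_budget_j: float) -> dict:
    """Pick the best-performing model whose predicted energy fits the budget.

    `query_complexity` in [0, 1] stands in for the proxy model's output:
    harder queries scale up the energy each model is predicted to spend.
    """
    feasible = []
    for m in MODELS:
        predicted_energy = m["energy_j"] * (0.5 + query_complexity)  # toy scaling
        if predicted_energy <= energy_budget_j:
            feasible.append((m["accuracy"], m))
    if not feasible:
        # Fall back to the cheapest model when nothing fits the budget.
        return min(MODELS, key=lambda m: m["energy_j"])
    return max(feasible, key=lambda t: t[0])[1]

# An easy query under a tight budget routes to a small model; a hard query
# with a generous budget can afford the large one.
print(route(0.1, 100.0)["name"])   # -> small-llm
print(route(0.9, 2000.0)["name"])  # -> large-llm
```

The design choice mirrored here is the one the abstract describes: the energy bound is a hard constraint set by the decision-maker, and accuracy is maximized only within that constraint.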