by Brian Kramak, AWS TruePower
Modern computers and the systems that run on them are amazingly powerful. They can “auto-magically” gather almost any type of data, at any frequency, and store it away for later examination. In the renewable energy generation space, SCADA systems collect data from hundreds of sensors that monitor equipment health, performance and state of operation. The value of that data, and its ability to optimize plant performance, depends on its quality.
My formal Navy training was as a mechanic, but I’ve always been drawn to computers and databases. Long before the advent of the World Wide Web, I remember working with DB2 (‘DataBase 2’) for Navy-related matters, and then with Microsoft Access for storing information on the state of certification of different wind turbine components. An advanced degree later (Project Management and Operations Research), I learned a programming language called R and how much more powerful it is than MS Excel for dealing with large volumes of information. I hesitate to use the phrase big data, as I’m only working with tens of millions of records and don’t want to offend any of the truly ‘big data’ experts out there. 🙂
So why the history lesson all the way back to DB2? Because what was true then is true now: Garbage in equals garbage out. In this blog, I’m going to chat about the challenges I face as a data analyst when handling performance data from a wind or solar energy plant. I often struggle to find information nuggets in the supposedly golden mountain of data because of avoidable quality issues. Here’s a list of the top five problems I encounter and how they can be overcome so that the plant owner/operators are mining true gold rather than fool’s gold.
1. Names can lie!
On about every other project, I end up with at least one parameter in the dataset that has a very clear name attached to the wrong data. In a recent example, wind turbine power was supposed to be bi-directional: it should go negative when the turbine is offline, indicating electrical consumption. This was a large array, and for two of the three turbine models in use that was true, but the third model reported only the power generated and never went below zero. This was an aggregation issue, and the fix proved relatively simple. When designing data aggregation systems, such as PI (a widely used industry system), a complete commissioning, circuit by circuit, is needed to ensure that all collected data is homogeneous.
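A check like that is easy to automate. Here's a minimal sketch in Python/pandas (I work in R, but the idea translates); the column names and values are made up for illustration:

```python
import pandas as pd

# Hypothetical SCADA extract: one row per 10-minute record.
# Column names ("model", "power_kw") are illustrative, not a standard.
scada = pd.DataFrame({
    "model":    ["A", "A", "B", "B", "C", "C"],
    "power_kw": [1500.0, -12.0, 980.0, -8.0, 1200.0, 0.0],
})

# A truly bi-directional power signal should dip negative while the
# turbine is offline (parasitic consumption). A model whose minimum
# never drops below zero is a red flag for a clipped or mislabeled feed.
min_by_model = scada.groupby("model")["power_kw"].min()
suspect = min_by_model[min_by_model >= 0].index.tolist()
print("Models with no negative power (check aggregation):", suspect)
```

Run during commissioning, a few lines like this catch the inhomogeneous circuit before years of data pile up behind it.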
2. Sensors degrade!
Many types of sensors degrade with time, or their calibration drifts. The biggest culprits are often the mechanical anemometers used for measuring wind speed. If turbine-specific power curves are being used to determine lost energy, you need to monitor the performance of anemometers. The fix is simple: put them in a calibration program, or swap them out every 2-3 years (or change to a non-mechanical design). If you haven’t uprated a given wind turbine but see that the power curve is slowly marching to the left, what’s really happening is that the anemometer drag is increasing. Unfortunately, your turbine isn’t magically performing better! Some sensors need to be routinely cleaned, such as solar sensors. Having a sound Quality Assurance plan in place that addresses the entire data collection process—parameter definition, data accuracy and recovery objectives, sensor redundancy, calibration schedules, and so on—is essential.
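That leftward march is detectable in the data itself. Below is a sketch, again in pandas with invented numbers, that compares a turbine's binned power curve across two years; a consistent rise in power at the same reported wind speed points at the anemometer, not the turbine:

```python
import pandas as pd

# Hypothetical 10-minute records from one turbine over two years.
# A drifting cup anemometer under-reads wind speed, so the same power
# shows up at a lower reported speed -- the curve "marches left".
df = pd.DataFrame({
    "year":     [2022] * 3 + [2023] * 3,
    "ws_ms":    [7.0, 8.0, 9.0, 7.0, 8.0, 9.0],    # reported wind speed
    "power_kw": [600, 900, 1300, 700, 1050, 1450],  # illustrative values
})

# Mean power in each 1 m/s wind-speed bin, per year.
df["ws_bin"] = df["ws_ms"].round()
curve = df.pivot_table(index="ws_bin", columns="year", values="power_kw")

# A positive shift at every bin, with no uprate on record, suggests
# sensor drift rather than a turbine that suddenly performs better.
shift = curve[2023] - curve[2022]
drift_suspected = bool((shift > 0).all())
print(curve, "\ndrift suspected:", drift_suspected)
```

In practice you'd filter to normal operation and use far more bins, but the logic is the same.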
3. Change logs are critical!
Say you have several years of data and are conducting a sensor calibration. Documenting exactly when the anemometer slope and offset values are changed, or when the old sensors are replaced with new ones, is critical. The fix is easy: start documenting the changes in a change log of some sort, and make sure they're saved as data (rows and columns, not just a text entry that can't be easily processed). We're not just talking about sensor change-outs; it is also critical to document the installation dates of all significant performance improvements. You don't want pre-upgrade data being used to determine post-upgrade energy yields.
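This is what "rows and columns" buys you. A hedged sketch, with made-up dates, components, and column names:

```python
import pandas as pd

# A change log kept as data, not free text. All entries here are
# invented for illustration.
changelog = pd.DataFrame({
    "date":      pd.to_datetime(["2021-03-15", "2022-06-01"]),
    "asset":     ["T07", "T07"],
    "component": ["anemometer", "blade_vortex_generators"],
    "action":    ["replaced", "installed"],
})

# With the log structured, splitting an analysis around an upgrade
# is a one-liner instead of a forensic exercise through old emails.
upgrade_date = changelog.loc[
    changelog["component"] == "blade_vortex_generators", "date"
].iloc[0]

records = pd.DataFrame({
    "timestamp": pd.to_datetime(["2022-05-30", "2022-06-02"]),
    "power_kw":  [1480.0, 1523.0],
})
post_upgrade = records[records["timestamp"] >= upgrade_date]
print(len(post_upgrade), "post-upgrade records")
```

A log buried in a Word document can't do that filter for you.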
4. Install the right sensors!
Here’s an example. If you have equipment installed in northern climates where icing is an issue, install an ice sensor! And make sure your air temperature, humidity and precipitation sensors are up to snuff, too, since they can serve as indicators of icing conditions. It is a sad fact that many older turbine models have no way to know when they are encountering icing conditions, which impair their performance. If you want to know about icing conditions to reduce stress on your turbines, or your analysts are expected to attribute lost energy to its various causes, ice sensors and properly operating meteorological sensors are critical. The fix is cheap: several hundred bucks, plus the cost of paying attention to the equipment.
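Even without an ice sensor, healthy met sensors let you flag likely icing periods. A minimal sketch; the thresholds (temperature near freezing plus high relative humidity) are illustrative placeholders, not a validated icing model:

```python
import pandas as pd

# Hypothetical met-mast records; column names are made up.
met = pd.DataFrame({
    "temp_c": [-2.0, 5.0, 0.5, -1.0],
    "rh_pct": [97.0, 98.0, 60.0, 95.0],
})

# Proxy flag: air near or below freezing AND very humid.
# Tune these thresholds to your site before trusting the flag.
met["icing_risk"] = met["temp_c"].between(-5, 1) & (met["rh_pct"] >= 90)
print(met["icing_risk"].tolist())
```

A dedicated ice sensor is still the right answer; this kind of proxy just keeps the lost-energy ledger honest in the meantime.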
5. Consistency is critical!
Addressing all of the above will improve the quality and consistency of your plant’s operations data. A well-placed meteorological mast helps greatly, too. Ideally, the mast should be operational before and after plant commissioning, in a location representative of the plant’s resource conditions. For a wind farm, that means upwind of the farm relative to the prevailing wind direction; a lidar system would also suffice. Maintain it consistently, and the data it yields will be as valuable as gold when trying to diagnose operational issues in the future.
The quality of the answers data analysts can give depends on the quality and accuracy of the data collected at the project. Owner/operators benefit from making data monitoring a high priority: it enhances their return on investment by enabling quick diagnosis of underperformance issues. Instituting a formal data quality assurance program is one of the best ways to ensure success.
Filed Under: O&M