High-performance computing: The need for speed

A subsurface sensation, ConocoPhillips’ supercomputer helps technical experts find and produce hydrocarbons, accelerate innovation

QUICK READ:
  • Supercomputer optimizes seismic imaging, reservoir simulation and technology development
  • Geoscientists, engineers and data scientists use the supercomputer to make faster, better decisions
  • High-performance computing (HPC) team manages the supercomputer, enhances workflows
By Gus Morgan

To stay atop the fast-paced exploration and production (E&P) industry, a realm where innovation drives performance, ConocoPhillips has a need, a need for speed.

And it has just the tool for the job.

ConocoPhillips’ supercomputer — known as a high-performance computing (HPC) cluster — plays a key role in optimizing the company’s E&P operations.

Norman Weathers, Supervisor, HPC Operations, oversees the team that manages the supercomputer.

By empowering faster and better decisions, this transformational tech tool not only helps ConocoPhillips lower its cost of supply, it also gives the company a competitive advantage.

ConocoPhillips’ geoscientists, engineers and data scientists use the supercomputer to interpret, model and drill the subsurface; develop technology; and improve workflows. 

A subsurface sensation, the supercomputer plays a key role in accelerating the time to first oil; reducing drilling uncertainties and risks such as dry holes; supporting fault avoidance; and optimizing reservoir understanding and production.

Just how important is the supercomputer to ConocoPhillips?

“It’s a critical piece of the company’s ability to be successful,” said Norman Weathers, Supervisor, HPC Operations, a subgroup of Geoscience IT Operations. 

It’s big. It’s fast. It’s a supercomputer.
ConocoPhillips’ supercomputer, located in CyrusOne’s facility in northwest Houston, is crucial to the company’s exploration and production operations.

ConocoPhillips’ HPC cluster fills a 10,000-square-foot room at CyrusOne, a data center facility in northwest Houston. It has thousands of connected compute nodes that work together like a single system. Functioning like a giant brain, the HPC cluster takes a divide-and-conquer approach to the work it's given, running parallel applications efficiently, reliably and quickly.

Researchers and businesses use supercomputers for problems that would overwhelm ordinary computers, leveraging their immense compute power for computational modeling, simulations and data analytics.

Dave Glover, HPC Systems Analyst

For example, scientists are using Frontera, the fifth-fastest supercomputer in the world, to solve pressing scientific and engineering challenges. Currently, they're using Frontera in the battle against COVID-19; and in 2019, they called upon this supercomputer for storm-surge forecasting during Hurricane Dorian. As an industrial partner with the Texas Advanced Computing Center at UT Austin, ConocoPhillips has access to Frontera for testing purposes. 

“We meet with them several times a year to learn best practices and advanced HPC methods,” said Dave Glover, HPC Systems Analyst.

Discussing ConocoPhillips’ high-performance computing cluster are, from right, Luis Velazquez, Director, Geoscience IT Operations; Nick Paladino, HPC Operations Team Lead; Norman Weathers, Supervisor, HPC Operations; Bob Beets, Manager, Server, Storage and Data Center Operations; and Paul Kissell, IT Director, Geology, Geophysics and Reservoir Engineering Services.

In terms of its size and power, ConocoPhillips’ supercomputer is the equivalent of a heavyweight boxer. While other supercomputers may be faster, ConocoPhillips’ machine delivers fit-for-purpose performance to the company’s business units and functions.

Ranked among the top 50 supercomputers in the world, ConocoPhillips’ HPC cluster delivers 9 petaflops of computing power, or 9 thousand million million (9 × 10¹⁵) floating-point calculations per second. It has 5,136 nodes, 152,064 cores, 1.5 petabytes of memory, 106 compute racks, 55 petabytes of parallel storage and a network with connection speeds of 2.98 terabytes a second. Linked to that high-speed network, the HPC cluster runs high-performance file systems.
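
For a quick sanity check of those figures, here is some back-of-the-envelope arithmetic; the per-node and per-core numbers below are simple averages derived from the published totals, not official specifications.

# Rough arithmetic on the published cluster figures (averages only; the real
# cluster likely mixes hardware generations, so these are not exact specs).
peak_flops = 9e15      # 9 petaflops = 9 x 10^15 floating-point operations per second
nodes = 5_136
cores = 152_064

print(f"Average cores per node:  {cores / nodes:.1f}")             # ~29.6
print(f"Average GFLOPS per core: {peak_flops / cores / 1e9:.0f}")   # ~59
print(f"Average TFLOPS per node: {peak_flops / nodes / 1e12:.2f}")  # ~1.75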

 Luis Velazquez, Director, Geoscience IT Operations

“It’s a significantly sized research computer that serves an important business function,” said Luis Velazquez, Director, Geoscience IT Operations. 

The HPC cluster is always evolving, he said, with new hardware or software updates.

“Its design is driven by business needs,” Velazquez said. “We make sure they have enough resources to get their work done efficiently and in a timely manner.”

The supercomputer is the perfect tool to solve the Subsurface team’s data-intensive challenges. 

“Seismic gets the lion’s share,” Weathers said, noting this group uses 4,656 of the 5,136 nodes. And the company’s reservoir engineering group, the second-biggest user group, uses 480 nodes.

While the Geophysical Services and Reservoir Engineering teams get most of the supercomputer’s processing power, other teams use it, too.

“We always want to help bring more end users to the HPC cluster,” Weathers said.

For instance, Marine uses it for computational fluid dynamics, and Oil Sands uses it for steam allocation optimization in Canada. In addition, reservoir engineers in the business units, along with data scientists from the company’s Analytics Innovation Center of Excellence (AICOE), use the supercomputer to process complex calculations and draw insights from big data sets.

The supercomputer also drives technology development in areas such as compressive seismic imaging (CSI), data analytics, drilling, full waveform inversion, computational fluid dynamics and electromagnetics.


What is an HPC cluster? A high-performance computing (HPC) cluster is a collection of many separate servers (computers), called nodes, that are connected by a fast interconnect. HPC aggregates computing power to deliver much higher performance than a typical desktop computer can provide, in order to solve large science, engineering or business problems.

HPC compute node

Basic components of a cluster:

  • Multiple computers
  • Connected over a network
  • With a shared file system
  • Support for running a parallel application across several computers

What are nodes and cores? A node is the physical self-contained computer unit in a computing cluster. It has its own processors, memory, I/O bus and storage. A core is a single processor compute core within a node. Each node can have multiple processors. Each processor can have multiple cores.
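
To make the node-and-core picture concrete, below is a minimal sketch of how a parallel job spreads its work across a cluster, assuming the MPI programming model via the mpi4py package (an assumption for illustration; the article does not say which tools ConocoPhillips’ teams use). Each process, typically pinned to one core on some node, handles its own slice of the problem, and the partial results are combined at the end.

# Minimal MPI sketch of the divide-and-conquer pattern described above.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id: 0, 1, ..., size-1
size = comm.Get_size()   # total number of processes across all nodes

N = 10_000_000                                         # total work items
my_items = np.arange(rank, N, size, dtype=np.float64)  # this rank's share
partial = np.sum(np.sqrt(my_items))                    # local piece of the work

total = comm.reduce(partial, op=MPI.SUM, root=0)       # combine results on rank 0
if rank == 0:
    print(f"{size} processes combined result: {total:.4e}")

# Launched with something like: mpirun -np 64 python split_work.py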


Seismic imaging, deep insight

Because seismic surveys and interpretation involve massive datasets and complex algorithms, ConocoPhillips’ supercomputer is a vital business tool for the Seismic Imaging team.

The supercomputer enables them to process all the data collected from compressive seismic imaging, or CSI, ConocoPhillips’ proprietary technology that helps the seismic team gain clearer pictures of rock formations below the Earth’s surface.

The Seismic Imaging team uses the supercomputer for Compressive Seismic Imaging (CSI), a proprietary technology that delivers enhanced images of the subsurface. 

“Having a high-performance computing system is a critical element to the compressive seismic imaging program,” said Brad Bankhead, Manager, Seismic Imaging. “Compressive seismic is dealing with problems in the trillions of computations.”

Since the introduction of CSI, the amount of seismic data collected and processed has ballooned. For instance, 10 years ago, Bankhead said a typical seismic survey had 15 billion different samples. Today, it’s 30 trillion samples.

Brad Bankhead, Manager, Seismic Imaging

“Seismic has always been big data,” Bankhead said. “The physics involved is extremely complicated, and the algorithms are compute intensive.”

Brian Macy, Principal Geophysicist, said the supercomputer’s processing power enables them to create seismic programs with more complicated physics.

Brian Macy, Principal Geophysicist

As a research geophysicist, Macy primarily focuses on developing seismic processing and imaging algorithms. He collaborates with the Seismic Imaging group and the HPC team to ensure they have an effective and performant HPC environment for seismic processing and imaging projects.

“The HPC team is vital to our success,” Macy said. “They collaborate with us closely to understand our challenges and how to optimize the entire system. Our HPC environment is fit-for-purpose for our seismic imaging workflows. When new challenges or needs come up, they can adapt quickly to help us succeed.”

All computers require code; it’s how programmers get a machine to perform a task. The Seismic Imaging team writes the research codes that run on the supercomputer, and the HPC Operations team optimizes them.

And that optimization work is vital, Bankhead said.

For instance, on one seismic project, Bankhead said his team estimated it would take five to six months to run their research codes on the supercomputer. However, the HPC Operations team optimized the codes, which reduced the runtime to one to two weeks.

Tim Osborne, Geophysical Programmer, HPC Operations

Tim Osborne, Geophysical Programmer, HPC Operations, works to improve runtimes on the supercomputer. 

“My job is to find what computer-related barriers are keeping someone from being as productive as possible and removing those barriers,” he said. This includes finding bottlenecks within a program, he said, or reworking the science of a program to be more computer friendly.

“The Seismic Imaging development and processing team is constantly creating new algorithms and testing to see if they work,” Osborne said. “Once we know they work, then if performance is a bottleneck for the algorithm, they send it my way to optimize so seismic processors can incorporate it into their workflows.”

When needed, Osborne also updates the code base, so it works optimally on the newest hardware.

“The HPC team helps ConocoPhillips to accomplish science unattainable without specialized computers,” he said. “We want to make new or better science possible by making programs run orders of magnitude faster: whether that’s changing a program from running in a day to a lunch hour, hours to minutes, or weeks to days.”
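
The example below is a generic illustration of that kind of speedup, not ConocoPhillips code: moving an inner loop from interpreted Python into vectorized native code routinely cuts runtimes by an order of magnitude or more on large arrays.

# Toy example: the same sum-of-squares computed two ways.
import time
import numpy as np

samples = np.random.rand(10_000_000)

t0 = time.perf_counter()
slow = 0.0
for x in samples:          # interpreted loop: one Python-level step per sample
    slow += x * x
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = float(np.dot(samples, samples))   # same arithmetic in optimized native code
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.2f} s   vectorized: {t_vec:.4f} s   speedup ~{t_loop / t_vec:.0f}x")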

Reservoir modeling and simulation

Because supercomputers excel at large-scale simulations, they are the ideal tool for producing reservoir simulation models and production forecasts.

Much like those used in seismic imaging, the mathematical calculations behind reservoir simulation are complex; a normal computer would take months to produce a result. ConocoPhillips’ supercomputer processes them much faster, with more detail, clarity and accuracy.

Claude Scheepens, Manager, Reservoir Modeling & Simulation, said ConocoPhillips’ supercomputer accelerates her team’s workflow.

Claude Scheepens, Manager, Reservoir Modeling & Simulation, said the supercomputer’s processing power speeds up her team’s workflows.

“We look at runtime,” Scheepens said. 

A few years ago, the HPC Operations team helped Scheepens’ team with software upgrades and new nodes, boosting productivity.

Before this refresh, Scheepens said one particularly large forecasting model had a runtime of 16-24 hours. This meant that the reservoir engineers could only run one model a day.

However, after the refresh and some code optimization, the model’s runtime dropped to eight hours, resulting in more simulation runs per day.

These speed gains have opened the door to uncertainty quantification studies, by allowing reservoir engineers to run many iterations of simulation models using different parameters sampled from various statistical distributions.
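
In outline, such a study looks something like the sketch below; the parameter names, distributions and the run_simulation placeholder are hypothetical stand-ins for the team’s actual simulator and inputs.

# Hypothetical sketch: sample uncertain reservoir parameters, run one
# simulation per sample, then summarize the spread of forecasts.
import numpy as np

def run_simulation(params):
    # Placeholder standing in for a full reservoir-simulator run on the cluster.
    return 1_000.0 * params["porosity"] * params["aquifer_strength"]

rng = np.random.default_rng(seed=42)
n_runs = 200   # shorter runtimes mean many more of these per day

forecasts = []
for _ in range(n_runs):
    params = {
        "porosity":         rng.normal(0.18, 0.02),   # sampled from a normal
        "permeability_md":  rng.lognormal(3.0, 0.5),  # sampled from a lognormal
        "aquifer_strength": rng.uniform(0.5, 2.0),    # sampled from a uniform
    }
    forecasts.append(run_simulation(params))

p10, p50, p90 = np.percentile(forecasts, [10, 50, 90])
print(f"Forecast percentiles: P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}")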

 System Analyst Joseph Sonnier, HPC Operations, is embedded with the Reservoir Modeling & Simulation team. Sonnier provides technical support and optimizes applications that run on ConocoPhillips’ supercomputer.

System Analyst Joseph Sonnier, an HPC team member, works as an embedded member of the Reservoir Modeling & Simulation group. He’s a facilitator and liaison between two disciplines, one with a business focus and the other with a technical IT focus.

“I want our reservoir engineers to make the most effective use of current and future computer technologies,” he said.

Sonnier is constantly monitoring the reservoir cluster resources to make sure the reservoir simulations are running smoothly. Any delay to simulations can impact the quality of reservoir engineers’ forecasting.

“In most cases, I’m trying to head off any potential problems before the end users even notice,” he said. 

For instance, Sonnier cited a functional improvement they created that allows reservoir simulation jobs to burst into the larger seismic cluster when all the reservoir compute nodes are busy.

“The ability to burst helps keep the reservoir jobs from waiting in the queue longer than needed,” he said.
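
Conceptually, the burst decision is simple; the partition names and logic below are a hypothetical sketch, not the actual scheduler configuration. If the dedicated reservoir nodes cannot start the job, it is routed to the much larger seismic partition instead of waiting in the queue.

# Hypothetical sketch of the "burst" decision.
def choose_partition(free_reservoir_nodes: int, nodes_needed: int) -> str:
    if free_reservoir_nodes >= nodes_needed:
        return "reservoir"   # normal case: run on the dedicated reservoir nodes
    return "seismic"         # burst case: borrow idle capacity from the seismic cluster

print(choose_partition(free_reservoir_nodes=3, nodes_needed=8))   # prints "seismic"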

The Reservoir Modeling & Simulation team creates dynamic reservoir characterizations for improved production forecasting.

Sayantan Bhowmik, Senior Reservoir and Analytics Development Engineer, said Sonnier is crucial to their success.

“You would not understand what Joseph does until something breaks,” Bhowmik said, “and then you would know how valuable he is.”

Syed Shibli, Tool Development and Application, Reservoir Engineering, said ConocoPhillips’ proprietary in-house reservoir modeling and uncertainty package, known as ABACUS, is optimized for the team’s nodes.

And it’s Sonnier’s expertise that makes such optimizations possible, Shibli said.

“Our workflow is very high end,” Shibli said. “It’s very specialized. The HPC Operations team is critical to our workflow.”

ConocoPhillips’ Reservoir Modeling & Simulation team relies on the supercomputer’s processing power to accelerate their workflows. System Analyst Joseph Sonnier, HPC Operations, (yellow shirt) reviews a reservoir model with Reservoir Modeling & Simulation team members, from left, Sayantan Bhowmik, David Bunch, Nick Rubenstein and Vanessa Angel.

Data science, advanced analytics 

While the Seismic Imaging and Reservoir Engineering groups are the supercomputer’s main users, ConocoPhillips’ advanced analytics team also uses it to develop machine learning and optimization algorithms.

Doug Hakkarinen, AICOE, Staff Data Scientist

Machine learning is useful because it automates processes and saves time. This allows humans to focus their time and energy on more complex decision making. In machine learning, data scientists build, train and deploy a model to generate predictions. Finding the optimal model is important, but it’s also time-consuming.

Doug Hakkarinen, AICOE, Staff Data Scientist, said the supercomputer speeds up the advanced analytics team’s modeling workflows. This enables them to explore large numbers of features, algorithms and hyperparameters to improve modeling solutions for the business.

“Fortunately, most of these optimization methods can benefit from large-scale parallelism such as that provided on the HPC,” he said.
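
A minimal sketch of that kind of parallel search appears below; scikit-learn and a random forest are used purely for illustration, since the article does not name the team’s actual algorithms or libraries. The n_jobs=-1 setting fans the candidate models out across every available core.

# Illustrative parallel hyperparameter search with scikit-learn.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=5_000, n_features=40, noise=0.1, random_state=0)

param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth":    [None, 10, 20, 40],
    "max_features": ["sqrt", 0.5, 1.0],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions,
    n_iter=20,        # candidate hyperparameter sets to evaluate
    cv=3,             # 3-fold cross-validation per candidate
    n_jobs=-1,        # evaluate candidates in parallel on all cores
    random_state=0,
)
search.fit(X, y)
print("Best parameters:", search.best_params_)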

In recent years, Hakkarinen said the advanced analytics team has used the supercomputer to solve various global business problems. These include a steam allocation optimization project in Canada; a train efficiency optimization project at the Teesside oil terminal in the United Kingdom; a reservoir dynamic model in China; and a rod-pump failure project in the United States.     

HPC Operations

All this activity wouldn’t be possible without ConocoPhillips’ HPC Operations team, which manages, monitors and maintains the supercomputer.

This lean, functional group handles the supercomputer’s system architecture, operations and application support.

Nick Paladino, HPC Operations Team Lead, shows off a DataDirect Networks drive array, which allows for scalable storage.

HPC Operations Team

Known for their computing expertise, the HPC Operations team ensures the supercomputer fulfills its performance expectations; these specialists are the interface between ConocoPhillips’ technical experts and a complex, powerful technology. 

How they do it:

  • Systems architecture - Creating and expanding the HPC cluster
  • Systems operations - Ensuring the HPC cluster hardware is up and running optimally
  • Application support - Maintaining third-party applications and assisting business units and corporate functions with internally developed applications

“We’re a relatively small team that does everything,” Weathers said. “We treat the parts, but we also treat the whole.”

The HPC Operations team collaborates with the business units and functions to ensure they have the computing power they need. They update and expand the cluster annually. 

“We want there to be enough capacity so people can do experiments and run iterative-type workflows,” Weathers said. “So we want them to use this instrument as they need to, to get the best result without waiting a long time.”

The HPC Operations team must balance the constant push to stay on the best cost-performance curve with the business units’ and functional groups’ changing objectives.

“We try to optimize the system to ensure the peaks in demand can be handled without impacting the people or process,” Weathers said.

The HPC Operations team provides technical support, optimization, scheduling and tailored problem solving. They also load, move, process, store and archive data. In addition, they source, test, buy and install computing equipment on behalf of the business units and functions.

The ultimate E&P tool

Moving ahead, ConocoPhillips’ top minds will increasingly call upon the supercomputer to help them make faster, better decisions. No doubt, the need for speed will only accelerate, a byproduct of ConocoPhillips’ data-driven culture.

It’s why having a supercomputer is vital to ConocoPhillips’ success; it’s the ultimate E&P tool.