The purpose of this article is to provide a practical, expert-level workflow for applying Gibbs–Duhem consistency tests to vapor–liquid equilibrium data sets so that experimental data quality can be screened before correlation, simulation, and distillation design.
1. Why Gibbs–Duhem consistency matters for VLE data.
Vapor–liquid equilibrium data are not only measurements of pressure, temperature, and compositions, but also an implicit measurement of mixture thermodynamics through activity coefficients and excess Gibbs energy.
Because the Gibbs–Duhem relation is a fundamental thermodynamic constraint, a VLE data set that strongly violates it is internally inconsistent and will typically produce unstable regressions, nonphysical model parameters, and poor extrapolation in process simulation.
Thermodynamic consistency testing is therefore best treated as an early gate in any VLE workflow, before fitting NRTL, UNIQUAC, Wilson, or equation-of-state mixing rules.
Note: A data set can appear visually smooth in P–x–y or T–x–y plots and still fail thermodynamic consistency because Gibbs–Duhem violations are often subtle and distributed across composition.
2. The Gibbs–Duhem constraint used in VLE data screening.
A convenient form for mixture consistency testing relates activity coefficients to changes in pressure and temperature through excess properties.
For nonreacting mixtures, one common statement used for consistency testing can be written as a balance that includes the composition-weighted activity coefficient differentials, plus pressure and temperature correction terms involving excess volume and excess enthalpy.
For many routine binary VLE data sets at low to moderate pressure, practical tests reduce the problem to evaluating whether activity coefficients inferred from the reported equilibrium satisfy an integral or differential Gibbs–Duhem condition within empirical tolerance.
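For a binary, nonreacting mixture, the balance described above can be written compactly as follows; the right-hand side vanishes at constant temperature and pressure, which is the limit most low-pressure tests exploit.

```latex
x_1\, \mathrm{d}\ln\gamma_1 + x_2\, \mathrm{d}\ln\gamma_2
  \;=\; \frac{V^{E}}{RT}\,\mathrm{d}P \;-\; \frac{H^{E}}{RT^{2}}\,\mathrm{d}T
```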
2.1 Data requirements and prechecks that prevent false failures.
Before running any Gibbs–Duhem consistency test, basic data hygiene checks must be completed because many apparent failures are caused by formatting or reporting issues rather than thermodynamics.
All liquid and vapor mole fractions should be checked to sum to unity within stated experimental uncertainty, and any rounding conventions should be made explicit and applied consistently.
Each data point should represent a single equilibrium state with self-consistent reporting of T, p, x, and y, and units should be normalized across the entire set.
For isobaric data, the full boiling temperature range must be credible because some integral tests use the total temperature span directly as part of the pass criterion.
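The prechecks above are easy to automate. The sketch below assumes a binary set reported as (T, p, x1, x2, y1, y2) with T in kelvin, and the closure tolerance is an illustrative assumption rather than a standard value.

```python
import numpy as np

def precheck_binary_vle(T, p, x1, x2, y1, y2, tol=0.005):
    """Return a list of hygiene issues for a binary VLE set (empty = clean).

    `tol` is an assumed composition-closure tolerance, not a standard value.
    """
    T, p, x1, x2, y1, y2 = map(np.asarray, (T, p, x1, x2, y1, y2))
    issues = []
    n = len(T)
    if any(len(a) != n for a in (p, x1, x2, y1, y2)):
        issues.append("inconsistent array lengths")
        return issues
    if np.any(np.abs(x1 + x2 - 1.0) > tol):
        issues.append("liquid mole fractions do not close to 1")
    if np.any(np.abs(y1 + y2 - 1.0) > tol):
        issues.append("vapor mole fractions do not close to 1")
    if np.any(T <= 0) or np.any(p <= 0):
        issues.append("nonpositive T or p (check units)")
    # Duplicated equilibrium states often indicate copy/paste errors.
    pts = np.column_stack((T, p, x1, y1))
    if len(np.unique(pts.round(6), axis=0)) < n:
        issues.append("duplicated data points")
    return issues
```

An empty return list means the set may proceed to activity-coefficient calculation; each string flags one failure mode from the checklist above.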
3. Converting VLE measurements to activity coefficients for testing.
Most Gibbs–Duhem VLE tests operate on activity coefficients or on an excess Gibbs energy function derived from them.
The required activity coefficients are computed from phase equilibrium, which is fundamentally equality of component fugacities between vapor and liquid phases.
At low pressure where vapor-phase nonideality is negligible, the working relationship often reduces to a modified Raoult form, where vapor composition, total pressure, liquid composition, and pure-component vapor pressure define γ values.
At elevated pressure or for strongly nonideal vapor phases, vapor fugacity coefficients and Poynting corrections should be included so that the computed γ values represent liquid nonideality rather than mixed vapor and liquid nonideality.
Note: Applying an ideal-vapor shortcut to high-pressure VLE data can create apparent Gibbs–Duhem inconsistency even when the underlying measurements are good.
3.1 Minimal calculation checklist for binary data sets.
The following checklist keeps the activity-coefficient calculation traceable and reproducible for later audits and model regression.
| Item. | What to compute or verify. | Typical output used by tests. |
|---|---|---|
| Normalization. | Ensure x1 + x2 = 1 and y1 + y2 = 1 for each point. | Normalized x1, y1. |
| Pure vapor pressures. | Evaluate P1sat(T) and P2sat(T) consistently for each point. | Pisat for activity coefficients and endpoint checks. |
| Activity coefficients. | Compute γ1 and γ2 from the chosen fugacity framework. | ln(γ1/γ2) and GE/RT. |
| Derived thermodynamic function. | Compute GE/RT = x1 ln γ1 + x2 ln γ2. | GE/RT for differential and limiting tests. |
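The checklist above can be sketched in a few lines, assuming the low-pressure modified Raoult form (ideal vapor, no Poynting correction) and caller-supplied pure-component vapor pressures, for example from Antoine correlations.

```python
import numpy as np

def activity_coeffs_modified_raoult(p, x1, y1, p1sat, p2sat):
    """gamma_i = y_i * P / (x_i * Pisat).

    Valid only at low pressure where vapor-phase nonideality and the
    Poynting correction are negligible; otherwise fugacity coefficients
    must be included (Section 3).
    """
    p, x1, y1, p1sat, p2sat = map(np.asarray, (p, x1, y1, p1sat, p2sat))
    g1 = y1 * p / (x1 * p1sat)
    g2 = (1.0 - y1) * p / ((1.0 - x1) * p2sat)
    ln_ratio = np.log(g1 / g2)              # input to the area test
    gE_RT = x1 * np.log(g1) + (1.0 - x1) * np.log(g2)
    return g1, g2, ln_ratio, gE_RT
```

The returned `ln_ratio` and `gE_RT` arrays feed directly into the integral, differential, and limiting tests described below.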
4. Integral Gibbs–Duhem consistency testing with the Herington area test.
The Herington area test is widely used because it is simple to implement and provides a clear visual diagnostic using a single plot of ln(γ1/γ2) against x1.
The method compares the signed areas above and below the zero line in the ln(γ1/γ2) versus x1 curve.
Let A be the area above zero and B be the magnitude of the area below zero over the full composition range.
The Herington D metric is computed as D = 100 · |(A − B) / (A + B)|.
For isobaric data sets, a second metric J is commonly used to account for the total boiling temperature range using J = 150 · |(Tmax − Tmin) / Tmin|.
Empirical pass criteria often applied in practice are D < 5 for isothermal data and |D − J| < 10 for isobaric data.
4.1 Practical integration details that determine robustness.
The key numerical task is integrating ln(γ1/γ2) over x1 using an interpolation that does not introduce oscillations.
Low-order polynomials, monotone splines, or smooth local regression are often more stable than a high-order global polynomial, especially when experimental points cluster near the ends.
If experimental points do not include compositions close to x1 = 0 or x1 = 1, extrapolation should be performed cautiously because the area test is sensitive to endpoint behavior.
Note: If the mixture is nearly ideal, ln(γ1/γ2) can be small everywhere, the denominator (A + B) becomes small, and D becomes numerically unstable; additional near-ideal screening is therefore recommended.
4.2 Example implementation sketch for the Herington D and J metrics.
```python
# Inputs:
#   x1:        liquid mole fractions for component 1, ascending order.
#   ln_ratio:  ln(gamma1/gamma2) evaluated at the same x1 points.
#   Tmin, Tmax: boiling-range temperatures (K), needed only for the
#               isobaric J metric.
# Note: the signed trapezoidal area equals A - B, so it does not need
# to be accumulated separately.

# 1) Accumulate the area above zero (A) and the magnitude of the area
#    below zero (B), splitting any trapezoid that crosses y = 0.
A = 0.0
B = 0.0
for i in range(len(x1) - 1):
    xL, xR = x1[i], x1[i + 1]
    yL, yR = ln_ratio[i], ln_ratio[i + 1]
    dx = xR - xL
    if yL >= 0.0 and yR >= 0.0:        # segment entirely above zero
        A += 0.5 * (yL + yR) * dx
    elif yL <= 0.0 and yR <= 0.0:      # segment entirely below zero
        B += 0.5 * (abs(yL) + abs(yR)) * dx
    else:                              # linear crossing at y = 0
        frac = abs(yL) / (abs(yL) + abs(yR))
        xZ = xL + frac * dx
        if yL > 0.0:                   # positive-to-negative crossing
            A += 0.5 * yL * (xZ - xL)
            B += 0.5 * abs(yR) * (xR - xZ)
        else:                          # negative-to-positive crossing
            B += 0.5 * abs(yL) * (xZ - xL)
            A += 0.5 * yR * (xR - xZ)

# 2) Compute D, guarding the near-ideal case where A + B -> 0.
if A + B < 1e-12:
    raise ValueError("A + B near zero: set too close to ideal for D")
D = 100.0 * abs(A - B) / (A + B)

# 3) For isobaric sets, compute J from the boiling temperature span.
J = 150.0 * abs(Tmax - Tmin) / Tmin

# 4) Practical pass checks (empirical thresholds).
#    Isothermal: pass if D < 5.
#    Isobaric:   pass if abs(D - J) < 10.
```
5. Modeling-based Gibbs–Duhem screening with the Van Ness test.
The Van Ness test is often treated as a modeling capability test because it measures how well a reasonable activity-coefficient model can reproduce the experimental VLE data.
A common implementation fits an activity-coefficient model to reproduce bubble pressure and vapor composition and then evaluates average deviations for pressure and vapor-phase composition.
Define the average percent pressure deviation as Δp and the average absolute vapor composition deviation as Δy using the experimental and calculated values over the data set.
A practical pass rule used in many screening workflows is that both Δp and Δy, each expressed on its percent-style scale, should fall below about 1 for the data set to pass.
This test is valuable because it detects data sets that may pass a purely integral Gibbs–Duhem check but still cannot be represented by standard excess Gibbs energy models without large residuals.
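A minimal deviation calculator is sketched below. It assumes Δp is the average absolute percent pressure deviation and Δy is 100 times the average absolute deviation in y1; published implementations differ in these conventions, and the model-calculated values come from the caller's fitted GE model.

```python
import numpy as np

def van_ness_deviations(p_exp, p_calc, y1_exp, y1_calc):
    """Van Ness-style screening metrics (assumed definitions):

      dp = (100 / N) * sum(|p_exp - p_calc| / p_exp)
      dy = (100 / N) * sum(|y1_exp - y1_calc|)
    """
    p_exp, p_calc = np.asarray(p_exp, float), np.asarray(p_calc, float)
    y1_exp, y1_calc = np.asarray(y1_exp, float), np.asarray(y1_calc, float)
    dp = 100.0 * np.mean(np.abs(p_exp - p_calc) / p_exp)
    dy = 100.0 * np.mean(np.abs(y1_exp - y1_calc))
    passed = (dp < 1.0) and (dy < 1.0)   # empirical screening rule
    return dp, dy, passed
```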
5.1 When Van Ness failures do not imply bad experiments.
A Van Ness failure can occur when the chosen model form is inadequate for the true physics of the mixture, such as strongly associating systems, liquid–liquid splits near the measurement conditions, or composition-dependent behavior not captured by a simple parameterization.
In such cases, a model upgrade or a different regression objective may be needed before labeling the measurements as inconsistent.
6. Differential Gibbs–Duhem screening with the point test.
Differential tests use the fact that Gibbs–Duhem constrains the slope of excess Gibbs energy with composition through activity coefficients.
A practical point test compares a slope computed from a smooth fitted GE/RT curve with a term involving ln(γ2/γ1) and additional correction terms that depend on whether the data are isothermal or isobaric.
The test yields an overall percent deviation metric, often denoted δ, obtained by averaging absolute pointwise deviations across data points.
A commonly used screening criterion is that the data set passes if δ is less than about 5.
Differential tests are sensitive to noise because numerical differentiation amplifies experimental scatter, so they should be applied only after a smooth but physically reasonable GE/RT representation has been obtained.
Note: If you differentiate raw ln γ values directly without smoothing, the point test will often fail due to numerical noise rather than true thermodynamic inconsistency.
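In the constant-T, constant-P limit of Gibbs–Duhem, the slope d(GE/RT)/dx1 equals ln(γ1/γ2), which gives a concrete form for the comparison above. The sketch below uses a low-order polynomial as the smooth GE/RT representation and a relative-deviation definition of δ; both choices are assumptions, since published point-test implementations vary.

```python
import numpy as np

def point_test_delta(x1, gE_RT, ln_ratio, deg=4):
    """Differential (point) test sketch for isothermal, low-pressure data.

    Fits gE/RT with a polynomial of assumed degree `deg`, differentiates
    the fit, and compares the slope with ln(gamma1/gamma2) pointwise.
    """
    x1 = np.asarray(x1, float)
    lr = np.asarray(ln_ratio, float)
    coeffs = np.polyfit(x1, np.asarray(gE_RT, float), deg)
    slope = np.polyval(np.polyder(coeffs), x1)   # d(gE/RT)/dx1 of the fit
    resid = np.abs(slope - lr)
    scale = np.maximum(np.abs(lr), 1e-3)         # guard where ln_ratio ~ 0
    delta = 100.0 * np.mean(resid / scale)
    return delta, delta < 5.0                    # empirical pass criterion
```

For a thermodynamically consistent synthetic set (for instance two-suffix Margules, where gE/RT = A12·x1·x2 and ln(γ1/γ2) = A12·(1 − 2x1)), δ reduces to numerical round-off.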
7. Limiting-behavior screening with the infinite dilution test.
Infinite dilution behavior is constrained because GE/(x1 x2 RT) must approach finite limits related to the limiting activity coefficients as x1 approaches 0 or 1.
An infinite dilution test compares these limiting values inferred from the fitted GE function to the limiting ln(γ1/γ2) behavior near the ends of composition.
The test typically produces two percent-deviation metrics, I1 and I2, corresponding to the two dilution limits.
A practical pass rule used in many workflows is that both I1 and I2 should be less than about 30.
This test is particularly useful for catching endpoint inconsistencies caused by incorrect pure-component vapor pressures, calibration drift at extreme compositions, or poor sampling near x1 ≈ 0 and x1 ≈ 1.
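One concrete realization of this limiting comparison is sketched below; the exact definitions of I1 and I2 vary between published implementations, so the relative-deviation forms and polynomial degree used here are assumptions.

```python
import numpy as np

def infinite_dilution_metrics(x1, gE_RT, ln_ratio, deg=3):
    """Infinite-dilution screening sketch.

    gE/(x1*x2*RT) -> ln(gamma1_inf) as x1 -> 0 and ln(gamma2_inf) as
    x1 -> 1, while ln(gamma1/gamma2) -> +ln(gamma1_inf) and
    -ln(gamma2_inf) at the same limits; I1 and I2 compare the two
    extrapolations at each end as percent deviations.
    """
    x1 = np.asarray(x1, float)
    g = np.asarray(gE_RT, float) / (x1 * (1.0 - x1))
    lr = np.asarray(ln_ratio, float)
    pg = np.polyfit(x1, g, deg)     # smooth gE/(x1 x2 RT)
    pl = np.polyfit(x1, lr, deg)    # smooth ln(gamma1/gamma2)
    g0, g1 = np.polyval(pg, 0.0), np.polyval(pg, 1.0)
    l0, l1 = np.polyval(pl, 0.0), -np.polyval(pl, 1.0)
    I1 = 100.0 * abs(g0 - l0) / max(abs(g0), 1e-6)
    I2 = 100.0 * abs(g1 - l1) / max(abs(g1), 1e-6)
    return I1, I2, (I1 < 30.0 and I2 < 30.0)    # empirical pass rule
```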
8. Pure-component endpoint consistency check that prevents systematic bias.
Even when Gibbs–Duhem tests pass, it is still possible for a VLE data set to be inconsistent with pure-component vapor pressures at the endpoints.
A practical endpoint check compares the mixture bubble or dew pressure as composition approaches a pure component with the independently evaluated pure-component vapor pressure at the same temperature.
If the endpoint mismatch is large, the VLE data set can mislead model regression because the fit will compensate by distorting parameters across the entire composition range.
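A sketch of that endpoint comparison follows, assuming the caller evaluates each pure-component vapor pressure at the appropriate endpoint temperature of the data set; the extrapolation degree and the 1 % tolerance are illustrative assumptions.

```python
import numpy as np

def endpoint_pressure_mismatch(x1, p, p1sat_at_end, p2sat_at_end,
                               deg=2, tol_pct=1.0):
    """Extrapolate total pressure to x1 = 1 and x1 = 0 and compare with
    the pure-component vapor pressures supplied by the caller."""
    coeffs = np.polyfit(np.asarray(x1, float), np.asarray(p, float), deg)
    p_at_x1_1 = np.polyval(coeffs, 1.0)   # should approach P1sat
    p_at_x1_0 = np.polyval(coeffs, 0.0)   # should approach P2sat
    mis1 = 100.0 * abs(p_at_x1_1 - p1sat_at_end) / p1sat_at_end
    mis2 = 100.0 * abs(p_at_x1_0 - p2sat_at_end) / p2sat_at_end
    return mis1, mis2, (mis1 < tol_pct and mis2 < tol_pct)
```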
9. Recommended expert workflow for screening a VLE data set.
The most reliable practice is to apply multiple tests because each test is sensitive to different error modes, and reliance on a single pass or fail outcome can be misleading.
The workflow below is designed to be implementable in automation and suitable for routine VLE data reduction pipelines.
| Stage. | Action. | Primary failure modes detected. |
|---|---|---|
| Stage 1. | Data hygiene checks on T, p, x, y, and unit normalization. | Composition sum errors, unit mistakes, duplicated points. |
| Stage 2. | Compute activity coefficients using an appropriate fugacity framework. | Incorrect Psat use, neglected vapor nonideality at pressure. |
| Stage 3. | Run Herington area test and check D and J criteria. | Global integral inconsistency across composition range. |
| Stage 4. | Fit a standard GE model and run Van Ness deviations for p and y. | Inability of standard models to represent reported equilibrium. |
| Stage 5. | Run point test on a smooth GE/RT representation. | Local slope inconsistency and localized outliers. |
| Stage 6. | Run infinite dilution test using limiting behavior near endpoints. | Endpoint errors and incorrect pure-component vapor pressures. |
| Stage 7. | Decision rule based on patterns across tests and stated uncertainty. | False rejection from noise, false acceptance from a single test. |
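Stage 7 can be closed out with an explicit decision rule. The majority-style rule below is purely illustrative; real pipelines should weight tests by the error modes listed in the table and by stated experimental uncertainty.

```python
def screening_verdict(results):
    """Combine individual test outcomes into a screening verdict.

    `results` maps test name -> bool (True = pass). The rule here
    (reject on two or more failures, flag one failure for review)
    is an illustrative assumption, not a standard.
    """
    failures = [name for name, ok in results.items() if not ok]
    if not failures:
        return "accept", failures
    if len(failures) == 1:
        return "review", failures
    return "reject", failures
```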
10. Interpreting pass and fail outcomes in a way that improves model regression.
A fail does not automatically mean the experiment is unusable because some tests are conservative and thresholds are empirical.
However, repeated failures across integral, differential, and limiting tests strongly suggest systematic problems such as wrong vapor pressure inputs, equilibrium not fully reached, or composition measurement bias.
If only the differential test fails while integral tests pass, the most common root cause is noise combined with an overly aggressive differentiation scheme, and the remedy is improved smoothing and uncertainty-aware fitting.
If the area test fails while the modeling test passes, the root cause is often an inconsistency that is compensated by flexible model parameters, and the remedy is closer examination of the ln(γ1/γ2) curve and endpoint behavior.
If endpoint checks fail, regression should not proceed until pure-component vapor pressure consistency is restored because endpoint bias contaminates the entire fitted parameter set.
FAQ
Which Gibbs–Duhem consistency test should be used first for binary VLE data?
The Herington area test is often used first because it is simple, uses only ln(γ1/γ2) versus x1, and provides an immediate global diagnostic across the full composition range.
After that initial screen, a modeling-based test and at least one local sensitivity test are recommended to avoid acceptance based on a single criterion.
How should activity coefficients be computed for consistency testing at higher pressure?
At higher pressure, activity coefficients should be computed from fugacity equality using vapor-phase fugacity coefficients and, when relevant, liquid-phase Poynting correction so that the γ values represent liquid nonideality rather than combined vapor and liquid effects.
If ideal vapor assumptions are used outside their validity range, the resulting γ values can violate Gibbs–Duhem even for good experimental data.
What causes a near-ideal mixture to look inconsistent in an area test?
Near-ideal systems can produce very small ln(γ1/γ2) values, which makes the total positive-plus-negative area small and can make the D ratio numerically unstable.
In such cases, additional near-ideal screening and careful numerical integration are needed so that the test reflects physics rather than numerical sensitivity.
Why can a data set pass an integral test but fail a modeling-based test?
An integral test checks global compliance with Gibbs–Duhem, while a modeling-based test checks whether a specific thermodynamic model can reproduce the measured p and y values with small deviations.
A data set may satisfy the constraint but still contain patterns that are hard to capture with a chosen model form, so model choice and regression objective should be reviewed before rejecting the experiment.
How should pass and fail results be used in distillation model preparation?
Data that pass multiple tests can be prioritized for regression and used to anchor parameter estimation across the full composition range.
Data that fail endpoint checks or fail several independent Gibbs–Duhem tests should be excluded or corrected before regression to avoid biased parameters and unreliable extrapolation.