Deactivation of Heavy Oil Hydroprocessing Catalysts: Fundamentals and Modeling
Again, full reaction networks can only be computer-generated. Such large reaction networks call for network reduction techniques: for complex feedstocks, the reaction network becomes so large that it is very difficult to handle during simulation. Two main classes of network reduction techniques can be distinguished: a priori lumping techniques and a posteriori lumping techniques. In a posteriori lumping, the full reaction network is generated first and is then reduced to a more tractable network in a systematic manner.
For each reaction of the final reaction network, a rate equation needs to be proposed or generated, to which a number of rate parameters will be associated.
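The bookkeeping behind such automatic generation can be sketched as follows: reactions are stored as data, and each one receives a mass-action rate expression with one associated rate constant. Species names and values below are placeholders, not taken from any model in the text.

```python
# Sketch (hypothetical species and constants): generate mass-action rate
# expressions for a small reaction network stored as data, so that every
# reaction automatically receives a rate law and one rate constant.

# Each reaction: (reactant lumps, product lumps, rate constant)
reactions = [
    (["A"], ["B"], 0.5),        # A -> B
    (["B"], ["C"], 0.2),        # B -> C
    (["A", "H2"], ["D"], 0.1),  # A + H2 -> D
]

def net_rates(conc):
    """Return d[conc]/dt for every species under mass-action kinetics."""
    rates = {s: 0.0 for s in conc}
    for reactants, products, k in reactions:
        r = k
        for s in reactants:
            r *= conc[s]          # rate proportional to reactant concentrations
        for s in reactants:
            rates[s] -= r         # consumption
        for s in products:
            rates[s] += r         # formation
    return rates

c0 = {"A": 1.0, "B": 0.0, "C": 0.0, "D": 0.0, "H2": 2.0}
print(net_rates(c0))
```

The same data structure scales to thousands of reactions, which is the point: the rate equations never need to be written by hand.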
For large reaction networks, one not only needs to propose ways to automatically generate the rate equations, but also to reduce the number of rate parameters that need to be identified. Last but not least, in order to be able to simulate the evolution of the composition in the reactor, one needs to provide an appropriate description of the feed, which can either be a molecular level description of the feed when detailed modeling techniques are used, or a lumped description of the feed.
No matter which modeling approach is selected to simulate complex chemical systems, the following aspects need to be clearly defined first: the feedstock composition (what level of detail is required?), the reaction network, the rate equations with their rate parameters, and the reactor description. In this review, detailed reactor models including vapor-liquid equilibrium thermodynamics, heat and mass transfer between phases or inside catalyst particles, external heat exchangers, etc., are not treated in detail.
In what follows, systems with several hundred or several thousand components, reaction intermediates, reactions and elementary steps will be considered. In such cases, two main strategies can be applied: a lumping strategy or a detailed modeling strategy. In the lumping strategy, a mean-field approach together with a rate-determining step approximation is generally used, as will be shown in the examples below. The corresponding rate equations are then written at a macroscopic level for a limited number of pathways between the analytically observable lumps of components, while the corresponding continuity equations are solved by means of a classical deterministic solver for a set of ordinary or partial differential equations (ODE/PDE).
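For a small lumped network, the continuity equations reduce to a set of ODEs that any standard deterministic solver handles. A minimal sketch for a three-lump series network A -> B -> C, with illustrative first-order rate constants, could look like this:

```python
# Sketch: continuity equations for a three-lump series network A -> B -> C,
# integrated with a standard deterministic ODE solver.
# Rate constants are illustrative values, not fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 0.4  # assumed first-order rate constants, 1/s

def rhs(t, y):
    a, b, c = y
    return [-k1 * a,            # A consumed
            k1 * a - k2 * b,    # B formed from A, consumed to C
            k2 * b]             # C formed

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
a, b, c = sol.y[:, -1]
print(f"A={a:.4f}  B={b:.4f}  C={c:.4f}  (mass balance: {a + b + c:.4f})")
```

The mass balance (A + B + C = 1) is conserved by construction, which is a useful sanity check for any lumped continuity model.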
In the detailed modeling strategy, the chemical detail is generally retained at the molecular level, or even at the level of the reaction intermediates. For these cases, we will describe techniques to provide a detailed feedstock composition, to generate and reduce full reaction networks, to derive the corresponding rate equations, and to decrease the number of rate parameters. Concerning the reaction rates, one can either use a mean-field approach and apply the quasi-steady-state (QSS) approximation, or calculate the complete solution by means of Monte Carlo (MC) methods, as will be illustrated below.
The lumps are then considered as homogeneous ensembles to which a kinetic model normally used for molecular compounds can be applied. This approach is often used for processes where the molecular characterization of the reactant mixtures is difficult or impossible because the feedstock is too complex, as is the case in the majority of petroleum refining processes (catalytic reforming, hydrotreating, hydroprocessing, catalytic cracking, thermal coking, etc.).
The development of a lumping approach usually starts with the description of the feedstock by choosing a set of lumps. The choice of lumps is always a compromise between the capabilities of the analytical techniques to characterize and quantify them on the one hand, and the needs of the final user in terms of model prediction and precision on the other.
In most cases, the analytical techniques are the limiting step and force the choice of the lumps. Over time, thanks to the development of more efficient separation techniques and the increase in computing power, lumped models became more and more complex, with a continuous increase in the number of lumps. For the catalytic cracking process, for example, models evolved from the early scheme of Weekman and Nace to the more detailed networks of Jacob et al. and, later, Christensen et al. These novel techniques will be detailed in the second part of this article. Figure 4 illustrates this evolution of the lumped kinetic models for catalytic cracking.
Figure 4. Illustration of the evolution of the lumped kinetic models for the catalytic cracking process: (a) Weekman and Nace; (b) Jacob et al. Despite their increasing complexity, lumped kinetic models are relatively easy to develop because the number of lumps and the number of reactions remain limited. Moreover, due to the multi-compound character of the lumps, the reaction pathways are generally global, with no intermediate species, and the kinetic rate equations are often simple (pseudo-order reactions, Langmuir approach in heterogeneous kinetics, etc.). Their kinetic parameters (pre-exponential factors, activation energies, adsorption constants, thermochemical constants, etc.) are generally estimated directly from experimental data.
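As an illustration of such a simple lumped scheme, a Weekman-Nace-type three-lump cracking model can be sketched as below: gas oil cracks with second-order kinetics to gasoline and to light gas plus coke, gasoline overcracks with first-order kinetics, and an exponential decay function represents catalyst deactivation. All numerical values are illustrative assumptions, not the published parameters.

```python
# Sketch of a three-lump cracking scheme of the Weekman-Nace type:
# gas oil (y1) -> gasoline (y2) + light gas/coke (y3), gasoline -> y3,
# with second-order gas-oil cracking and an exponential catalyst-decay
# function phi(t). All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k0 = 2.0     # assumed overall gas-oil cracking constant
k1 = 1.4     # assumed gas oil -> gasoline constant (k1 <= k0)
k3 = 0.3     # assumed gasoline -> light gas + coke constant
alpha = 0.1  # assumed catalyst decay constant, 1/s

def rhs(t, y):
    y1, y2, y3 = y
    phi = np.exp(-alpha * t)        # catalyst activity
    r_go = k0 * y1**2 * phi         # second-order gas-oil cracking
    r_gl = k3 * y2 * phi            # first-order gasoline overcracking
    return [-r_go,
            (k1 / k0) * r_go - r_gl,
            (1.0 - k1 / k0) * r_go + r_gl]

sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0, 0.0], rtol=1e-8)
y1, y2, y3 = sol.y[:, -1]
print(f"gas oil {y1:.3f}, gasoline {y2:.3f}, gas+coke {y3:.3f}")
```

Even this toy version reproduces the characteristic behavior: gasoline goes through a maximum because it is both formed from gas oil and consumed by overcracking.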
This simplicity results in fast kinetic models that require limited computing power, a feature that is very attractive for the optimization and control of petroleum processes.
This explains why this type of model has been predominant in petroleum refining for over 50 years. However, despite this advantage, lumping methods also have some drawbacks that need to be kept in mind. Firstly, the lumps are often defined as ensembles of compounds with similar physicochemical properties, as determined by the analytical techniques, but not necessarily with similar reactivities. If the thermodynamic equilibria inside the lumps are not maintained by fast intra-lump reactions, the properties of the lumps will be modified during the reaction, because the internal compounds are transformed according to their own reactivities. In this case, the base hypothesis that the properties of the lumps are homogeneous and constant during reaction is clearly wrong (Li and Rabitz; Wei and Kuo). The second drawback of lumped models is that they are not associated with a molecular kinetic theory but are directly derived from experimental data, coming from pilot units in most cases.
In petroleum refining, these experiments are long and expensive. Consequently, even if the theoretical formulation of lumped kinetic models is relatively quick and cheap, their parameter estimation requires more time and money.
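The regression step itself can be sketched on synthetic data. Here a single first-order lumped rate constant is fitted by nonlinear least squares; the data, noise level and "true" value are invented for the illustration.

```python
# Sketch: estimating a lumped rate constant by nonlinear regression on
# synthetic first-order conversion data with added noise. The "true"
# constant and noise level are assumptions made for the illustration.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 15)   # residence times (arbitrary units)
k_true = 0.35
y_obs = np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

def model(t, k):
    """First-order disappearance of a lump."""
    return np.exp(-k * t)

k_fit, k_cov = curve_fit(model, t, y_obs, p0=[0.1])
print(f"fitted k = {k_fit[0]:.3f} +/- {np.sqrt(k_cov[0, 0]):.3f}")
```

In practice the "data" come from long pilot-plant campaigns rather than a random-number generator, which is exactly why the estimation step dominates the cost of lumped model development.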
Indeed, in order for lumped models to be robust and feed independent, a wide variety of experimental data in terms of operating conditions and, even more importantly, feedstock composition is needed. Moreover, with the increase in both the complexity of the kinetic models and the number of lumps, the analyses needed to describe the mixture become more and more extensive, further increasing the experimental cost of developing lumped models.
Finally, the last main drawback of lumped kinetic models is related to their relative inability to determine the physicochemical properties of the effluent from its composition and the properties of its lumps.
In practice, these models often use correlations to estimate the desired product properties, but this approach is empirical and very limited. Moreover, because the properties of the lumps can change during the reaction, this estimation method remains somewhat arbitrary. The analyses required to define these lumps were very simple: elemental sulfur content for the sulfur lump, elemental nitrogen content for the nitrogen lump, and MS for the other lumps.
The reaction network contained 6 overall reactions. The reactions for aromatic hydrogenation were considered to be reversible, while the reactions for hydrodesulfurization and hydrodenitrogenation were defined as irreversible. The kinetic model was implemented in a single-phase plug-flow reactor model and contained 15 parameters, which were determined from a database of 90 experimental data points with 5 independent responses each.
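A minimal sketch of such a network, with one irreversible HDS reaction and one reversible aromatic hydrogenation and hydrogen taken in large excess, could be written as follows. The constants are illustrative assumptions, not the parameters of the 15-parameter model discussed above.

```python
# Sketch of a minimal lumped hydrotreating network: irreversible HDS of a
# sulfur lump S, and reversible hydrogenation of an aromatic lump A <-> AH.
# Hydrogen is assumed in large excess (lumped into the constants).
# All values are illustrative.
from scipy.integrate import solve_ivp

k_hds = 0.8         # assumed irreversible HDS rate constant
k_f, k_r = 0.5, 0.2 # assumed forward/reverse hydrogenation constants

def rhs(t, y):
    s, a, ah = y
    r_hds = k_hds * s            # irreversible desulfurization
    r_hyd = k_f * a - k_r * ah   # reversible hydrogenation, net forward
    return [-r_hds, -r_hyd, r_hyd]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0, 0.0], rtol=1e-8, atol=1e-12)
s, a, ah = sol.y[:, -1]
print(f"S={s:.2e}  A={a:.3f}  AH={ah:.3f}")
```

At long residence times the sulfur lump is essentially extinguished, while the aromatic pair relaxes to its equilibrium ratio A/AH = k_r/k_f, which is how the reversibility of hydrogenation shows up in practice.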
This extended reaction network contained 9 overall reactions. To reduce the number of rate constants in the rate equations, the author applied the RDS approach to the various reactions on the two types of active sites: hydrogenation sites and hydrogenolysis sites. Although three full-range LCOs and five LCO aromatic extracts were used, the feedstock variability was still limited, as only one type of feed (FCC-derived feeds) was used.
Further improvements concerned the introduction of thermodynamic constraints for the reversible reaction, and the use of a two-phase plug-flow reactor model based on a Grayson-Streed flash calculation.
By introducing the effect of temperature, the number of parameters was increased to 31; these were determined from a database of experimental data points with 8 independent responses each. To further improve the prediction capabilities of the kinetic model for the AGO hydrotreating process, the direct lumping approach based on the analytical capabilities was abandoned, and a new kinetic model based on a feedstock reconstruction approach (see Sect. ) was developed. The corresponding rate equations were derived based on the presence of two types of active sites (hydrogenation and hydrogenolysis) using the RDS approach.
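A rate equation of the Langmuir-Hinshelwood-Hougen-Watson (LHHW) type, of the kind obtained with such a rate-determining-step assumption, can be sketched as below. The functional form shown is a generic single-site surface-reaction form with competitive adsorption, and all constants are illustrative assumptions, not the parameters of the model described in the text.

```python
# Sketch of a generic LHHW rate form: surface reaction between an adsorbed
# aromatic and adsorbed hydrogen as the rate-determining step, with both
# species competing for the same sites. All constants are illustrative.
def lhhw_rate(c_arom, p_h2, k=1.0, K_a=2.0, K_h=0.5):
    """r = k*K_a*K_h*C_A*P_H2 / (1 + K_a*C_A + K_h*P_H2)**2"""
    num = k * K_a * K_h * c_arom * p_h2
    den = (1.0 + K_a * c_arom + K_h * p_h2) ** 2
    return num / den

print(lhhw_rate(0.1, 10.0))
```

The squared adsorption denominator is what makes such rate laws saturate at high hydrogen pressure, and it is also where most of the rate parameters that the RDS approach tries to reduce come from.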
The largest extension concerned the feedstocks, as 24 different types of industrial gas oils were included: straight-run gas oils from a large variety of crude oils, LCO, coker gas oils and their mixtures. This feed diversity is particularly important to confer a high degree of robustness on the model.
Figure 5.
For each of these fractions, an elemental analysis was performed in order to determine its composition in terms of carbon, hydrogen, sulfur, nitrogen, oxygen, nickel and vanadium.
The first kinetic scheme described the evolution of the quantity of these different fractions, with an additional specific focus on the sulfur, vanadium and nickel elements (Fig. ). The total number of lumps for this approach was equal to . The reactor was modeled as a single-phase plug-flow reactor with a Langmuir-Hinshelwood formalism to describe the adsorption of the different species on the catalyst. Subsequently, this model was generalized (Le Lannic; Verstraete et al.).
This new approach was more rigorous, but the number of lumps was increased to 40 (Fig. ). This new kinetic model was coupled with a catalyst morphology model to account for the diffusion of asphaltenes and resins inside the catalyst and for the modification of the catalyst properties (porosity, surface area, etc.). A third kinetic and reaction model modified the lumped kinetic network (Fig. ).
Figure 6.
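The effect of diffusion limitations for large molecules such as asphaltenes is classically quantified through a Thiele modulus and an effectiveness factor. A minimal sketch for a first-order reaction in a spherical pellet, with all numerical values illustrative, could be:

```python
# Sketch: effectiveness factor of a spherical catalyst pellet for a
# first-order reaction, the classical way to quantify diffusion limitations
# of large molecules (e.g. asphaltenes). All values are illustrative.
import numpy as np

def effectiveness_sphere(k, D_eff, R):
    """First-order reaction in a sphere: phi = R*sqrt(k/D_eff),
    eta = (3/phi) * (1/tanh(phi) - 1/phi)."""
    phi = R * np.sqrt(k / D_eff)
    return (3.0 / phi) * (1.0 / np.tanh(phi) - 1.0 / phi)

# Hindered diffusion of a large molecule lowers D_eff and hence eta:
eta_small = effectiveness_sphere(k=1.0, D_eff=1e-6, R=1e-3)  # small molecule
eta_large = effectiveness_sphere(k=1.0, D_eff=1e-8, R=1e-3)  # hindered asphaltene
print(f"eta (small molecule) = {eta_small:.3f}, eta (asphaltene) = {eta_large:.3f}")
```

Coupling such an effectiveness factor to the lumped kinetics is, in essence, what the catalyst morphology model mentioned above does, with the added complication that porosity and surface area evolve as metals and coke deposit.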
Due to the complexity of the feedstock, but also because of the limitations in computer hardware and software, classical kinetic models of industrial processes, such as petroleum refining or petrochemical processes, have traditionally been based on a lumping approach, as illustrated above. However, over the last decades, these industries have been subjected to more stringent environmental legislation and tighter product quality constraints, which are phrased in terms of the molecular or atomic composition of feedstocks and process products (Neurock). Novel kinetic models must therefore be able to predict the performance of processes at the molecular level.
This cannot be ensured by a lumping approach due to its limitations in describing the molecular composition throughout the entire reactor. The limitations of lumped kinetic models therefore motivated the development of more fundamental and more detailed kinetic models. Two other approaches can be distinguished to model the complex process kinetics: a mechanistic approach and a molecular approach.
The difference between these approaches resides in the level of detail at which the reaction pathways are described. In the mechanistic approach, the reaction system is described at the level of the elementary steps and reaction intermediates; a limited number of a priori assumptions are needed and the rate parameters are more fundamental in nature. Molecular models result from an intermediate approach between the mechanistic and lumping approaches. Here, a chemical reaction system is modeled at the molecular level without including its reaction intermediates.
The reactions are viewed as molecule-to-molecule transitions and each reaction is characterized by an overall rate constant. The effects of reaction intermediates are included in the rate equations by imposing assumptions during their derivation. Both detailed kinetic approaches enable one to overcome the drawbacks related to the lumping approach, since they both retain a molecular description of the reaction system throughout the entire reactor simulation, which allows the creation of feedstock-independent models.
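The way an intermediate is folded into an overall molecule-to-molecule rate constant can be made concrete with a symbolic QSS derivation for the simple sequence A -> I -> B, with a reverse step for the first reaction. The symbols `k1`, `k1m`, `k2` are generic and not tied to any model in the text.

```python
# Sketch: eliminating a reaction intermediate I via the quasi-steady-state
# assumption for A -> I (k1, reverse k1m) followed by I -> B (k2).
# The QSS condition d[I]/dt = 0 yields a single overall rate constant.
import sympy as sp

k1, k1m, k2, A, I = sp.symbols("k1 k1m k2 A I", positive=True)

# QSS on the intermediate: k1*A - k1m*I - k2*I = 0  ->  solve for I
I_qss = sp.solve(sp.Eq(k1 * A - k1m * I - k2 * I, 0), I)[0]

# Rate of B formation with I eliminated:
r_B = sp.simplify(k2 * I_qss)
print(r_B)
```

The result, r = k1*k2/(k1m + k2) * [A], is exactly a molecule-to-molecule rate law with one overall constant: the intermediate has disappeared from the expression but its effect survives in the composite constant.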
However, such models expect a molecular description of the feedstocks, of the reaction pathways, and of the rate equations and rate parameters. In what follows, methods to determine a molecular-level description of the feedstock are first discussed in detail and illustrated. Subsequently, the simulation of detailed reaction networks is discussed by simultaneously treating the generation and reduction of the detailed reaction networks, and the rate equations, rate parameters and reactor simulation, before illustrating examples of various applications.
The first step in the simulation of a detailed molecule-based kinetic model is to determine the molecular-level description of the feedstock. The molecular description can be obtained either by using, when possible, advanced analytical characterization techniques, or by numerically creating a molecular representation of the feedstock through composition modeling algorithms, here called molecular reconstruction methods. In this section, the existing methods of obtaining the molecular description of the feedstock will be reviewed.
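As a toy illustration of the reconstruction idea, one can sample a synthetic n-paraffin mixture from an assumed chain-length distribution and tune the distribution parameter until a measured bulk property, here the average molecular weight, is matched. The distribution, the target value and the restriction to n-paraffins are all hypothetical simplifications.

```python
# Sketch of stochastic molecular reconstruction on a toy mixture:
# sample n-paraffin carbon numbers from an assumed distribution, then
# tune the distribution parameter so that the synthetic mixture matches
# a "measured" bulk average molecular weight. All values hypothetical.
import numpy as np

rng = np.random.default_rng(42)
target_mw = 240.0  # hypothetical measured bulk average MW, g/mol

def sample_mixture(mean_carbon, n=20000):
    """Average MW of n sampled n-paraffins CnH(2n+2), with carbon numbers
    drawn from an assumed shifted-Poisson distribution (min C5)."""
    carbons = 5 + rng.poisson(mean_carbon - 5, size=n)
    return (14.0 * carbons + 2.0).mean()

# Crude 1-D search over the distribution parameter:
best = min(np.arange(6.0, 30.0, 0.25),
           key=lambda m: abs(sample_mixture(m) - target_mw))
print(f"mean carbon number ~ {best:.2f}")
```

Real reconstruction algorithms work the same way in spirit, but sample full structural attributes (cores, chains, heteroatoms) and match many properties at once (elemental analysis, distillation curve, density, NMR), typically with a proper optimization method rather than a grid search.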
Common analytical techniques used to obtain a molecular characterization of the feedstocks are MS and GC. MS consists of transforming molecules, in the gaseous state, into ion fragmentation patterns by electron bombardment. The MS technique is suitable for quantifying an individual targeted compound or a small number of compounds in a mixture. GC separates the volatile components of mixtures through a chromatographic column with a gaseous mobile phase, exploiting physical characteristics such as diffusivity, adsorption, absorption and volatility.
As the GC technique is a separation technique and not an identification method, the chromatography column must be coupled to a specific detector. GC techniques have a high separation efficiency, equivalent to 2.