978 results for radial distribution functions


Relevance: 80.00%

Abstract:

Many organic compounds present in water resources cause irreversible damage to human health and to ecosystems. Among these hazardous substances, phenolic compounds play an important role in current contamination. The use of membrane technology is growing rapidly in drinking water production and wastewater treatment. The removal of organic compounds by nanofiltration membranes is characterized not only by molecular sieving effects but also by membrane-solute interactions. The influence of the sieving parameters (molecular weight and molecular diameter) and of the physicochemical interactions (dissociation constant and molecular hydrophobicity) on the membrane rejection of the organic solutes was studied. The molecular hydrophobicity is expressed as the logarithm of the octanol-water partition coefficient. This paper proposes a method for symbolic knowledge extraction from a neural network once it has been trained to the desired performance; the method is based on detecting the most important variables in problems where multicollinearity exists among the input variables.
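Since the abstract does not specify the extraction algorithm, the sketch below uses permutation importance on a small trained network with deliberately collinear inputs, purely as a stand-in for ranking the most important input variables; the data, layer sizes and variable names are invented for illustration and are not the authors' method.

```python
# Sketch only: permutation importance as a stand-in for ranking the inputs
# of a trained network when the inputs are collinear.  All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)      # strongly collinear with x1
x3 = rng.normal(size=n)                       # independent and irrelevant
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * rng.normal(size=n)       # only x1 truly drives y

# Train the network to an acceptable performance first, then interrogate it.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)

# Rank variables by the drop in score when each input column is shuffled.
result = permutation_importance(net, X, y, n_repeats=30, random_state=0)
for name, score in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```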

Relevance: 80.00%

Abstract:

Palladium nanoparticles have been immobilized into an amino-functionalized metal-organic framework (MOF), MIL-101Cr-NH2, to form Pd@MIL-101Cr-NH2. Four materials with different loadings of palladium have been prepared (denoted as 4-, 8-, 12-, and 16 wt% Pd@MIL-101Cr-NH2). The effects of catalyst loading and the size and distribution of the Pd nanoparticles on the catalytic performance have been studied. The catalysts were characterized by using scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier-transform infrared (FTIR) spectroscopy, powder X-ray diffraction (PXRD), N2-sorption isotherms, elemental analysis, and thermogravimetric analysis (TGA). To better characterize the palladium nanoparticles and their distribution in MIL-101Cr-NH2, electron tomography was employed to reconstruct the 3D volume of 8 wt% Pd@MIL-101Cr-NH2 particles. The pair distribution functions (PDFs) of the samples were extracted from total scattering experiments using high-energy X-rays (60 keV). The catalytic activity of the four MOF materials with different loadings of palladium nanoparticles was studied in the Suzuki-Miyaura cross-coupling reaction. The best catalytic performance was obtained with the MOF that contained 8 wt% palladium nanoparticles. The metallic palladium nanoparticles were homogeneously distributed, with an average size of 2.6 nm. Excellent yields were obtained for a wide scope of substrates under remarkably mild conditions (water, aerobic conditions, room temperature, catalyst loading as low as 0.15 mol%). The material can be recycled at least 10 times without alteration of its catalytic properties.
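As a side note on the PDF step, the sketch below shows the textbook sine-Fourier transform that takes a structure factor S(Q) to the reduced pair distribution function G(r), here applied to a synthetic S(Q); real extraction from 60 keV total-scattering data involves additional corrections and is normally done with dedicated software.

```python
# Minimal numerical sketch of G(r) = (2/pi) * integral of Q [S(Q) - 1] sin(Qr) dQ
# on a synthetic structure factor; not the corrected experimental pipeline.
import numpy as np

Q = np.linspace(0.1, 25.0, 2500)              # momentum transfer grid (1/Angstrom)
# Toy S(Q): a damped oscillation mimicking a ~2.75 Angstrom Pd-Pd distance.
S = 1.0 + np.exp(-0.005 * Q**2) * np.sin(2.75 * Q) / (2.75 * Q)

r = np.linspace(0.5, 10.0, 500)               # real-space grid (Angstrom)
dQ = Q[1] - Q[0]
integrand = Q[None, :] * (S[None, :] - 1.0) * np.sin(Q[None, :] * r[:, None])
G = (2.0 / np.pi) * integrand.sum(axis=1) * dQ

print("G(r) has its first maximum near r = %.2f Angstrom" % r[np.argmax(G)])
```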

Relevance: 80.00%

Abstract:

Quark transverse spin and quark transverse momentum distributions are two current research frontiers in understanding the spin structure of the nucleon. The goal of the research reported in this dissertation is to extract new information on the quark transversity distribution and the novel transverse-momentum-dependent Sivers function in the neutron. A semi-inclusive deep inelastic scattering experiment was performed in Hall A of Jefferson Lab using a 5.9 GeV electron beam and a transversely polarized ³He target. The scattered electrons and the produced hadrons (pions, kaons, and protons) were detected in coincidence with two large magnetic spectrometers. By regularly flipping the spin direction of the transversely polarized target, the single-spin asymmetry (SSA) of the semi-inclusive deep inelastic reaction ³He↑(e,e'h±)X was measured over the kinematic range 0.13 < x < 0.41 and 1.3 < Q² < 3.1 GeV². The SSA contains several azimuthal angular modulations, which are convolutions of the quark distribution functions in the nucleon and the quark fragmentation functions into hadrons. It is from the extraction of the various "moments" of these azimuthal angular distributions (the Collins moment and the Sivers moment) that we obtain information on the quark transversity distribution and the novel T-odd Sivers function. In this dissertation, I first introduce the theoretical background and experimental status of nucleon spin physics and the physics of SSAs. I then present the experimental setup and data collection of the JLab E06-010 experiment. Details of the data analysis are discussed next, with emphasis on kaon particle identification and the Ring-Imaging Cherenkov detector, which were my major responsibilities in this experiment. Finally, results on the kaon Collins and Sivers moments extracted with the Maximum Likelihood method are presented and interpreted. I conclude with a discussion of the future prospects for this research.
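The sketch below illustrates, on toy data only, the kind of unbinned maximum-likelihood extraction referred to above: events are generated with assumed Collins and Sivers amplitudes and an assumed target polarisation, and the sin(φ_h + φ_S) and sin(φ_h − φ_S) moments are recovered by maximising the likelihood. All numbers are invented; the actual E06-010 analysis includes acceptance and other experimental corrections not shown here.

```python
# Toy maximum-likelihood extraction of Collins and Sivers moments from a
# single-spin asymmetry; amplitudes, polarisation and event counts are invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
P_target = 0.55                      # assumed target polarisation
A_C_true, A_S_true = 0.03, -0.02     # invented "true" moments

# Generate toy events (spin = +1 or -1) by accept/reject sampling.
n_raw = 200_000
phi_h = rng.uniform(0, 2 * np.pi, n_raw)
phi_S = rng.uniform(0, 2 * np.pi, n_raw)
spin = rng.choice([-1, 1], n_raw)
w = 1.0 + spin * P_target * (A_C_true * np.sin(phi_h + phi_S)
                             + A_S_true * np.sin(phi_h - phi_S))
keep = rng.uniform(0.0, 2.0, n_raw) < w
phi_h, phi_S, spin = phi_h[keep], phi_S[keep], spin[keep]

def nll(params):
    """Negative log-likelihood per event for the two azimuthal moments."""
    a_c, a_s = params
    pdf = 1.0 + spin * P_target * (a_c * np.sin(phi_h + phi_S)
                                   + a_s * np.sin(phi_h - phi_S))
    return -np.mean(np.log(np.clip(pdf, 1e-12, None)))

fit = minimize(nll, x0=[0.0, 0.0])
print("fitted Collins and Sivers moments:", fit.x.round(4))
```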

Relevance: 80.00%

Abstract:

In this paper we construct a model for the simultaneous compaction (by which clusters are restructured) and growth of clusters by pairwise coagulation. The model has the form of a multicomponent aggregation problem in which the components are cluster mass and cluster diameter. Following suitable approximations, exact explicit solutions are derived which may be useful for the verification of simulations of such systems. Numerical simulations are presented to illustrate typical behaviour and to show the accuracy of approximations made in deriving the model. The solutions are then simplified using asymptotic techniques to show the relevant timescales of the kinetic processes and to elucidate the shape of the cluster distribution functions at large times.
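As a point of reference for the coagulation part only, the sketch below integrates the discrete Smoluchowski equations with a constant kernel, a case with a known exact solution. The paper's model additionally carries cluster diameter as a second component and includes the compaction process, both of which are omitted here.

```python
# Discrete Smoluchowski coagulation with a constant kernel (coagulation part only).
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0          # constant coagulation kernel
N_MAX = 100      # largest cluster size tracked

def smoluchowski(t, n):
    """n[k] is the number density of clusters of size k + 1."""
    dn = np.zeros_like(n)
    total = n.sum()
    for k in range(N_MAX):
        size = k + 1
        # Gain: coagulation of two clusters whose sizes add up to `size`.
        gain = 0.5 * K * sum(n[i] * n[size - 2 - i] for i in range(size - 1))
        # Loss: a cluster of this size coagulating with any other cluster.
        loss = K * n[k] * total
        dn[k] = gain - loss
    return dn

n0 = np.zeros(N_MAX)
n0[0] = 1.0      # monodisperse initial condition: only monomers
sol = solve_ivp(smoluchowski, (0.0, 5.0), n0, t_eval=[0.0, 1.0, 5.0])
# For K = 1 the total number density follows the exact form 1 / (1 + t / 2).
print("total cluster number vs time:", sol.y.sum(axis=0).round(4))
```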

Relevance: 80.00%

Abstract:

We investigate key characteristics of Ca²⁺ puffs in deterministic and stochastic frameworks that all incorporate the cellular morphology of IP₃ receptor channel clusters. In a first step, we numerically study Ca²⁺ liberation in a three-dimensional representation of a cluster environment with reaction-diffusion dynamics in both the cytosol and the lumen. These simulations reveal that Ca²⁺ concentrations at a releasing cluster range from 80 µM to 170 µM and equilibrate almost instantaneously on the time scale of the release duration. These highly elevated Ca²⁺ concentrations eliminate Ca²⁺ oscillations in a deterministic model of an IP₃R channel cluster at physiological parameter values, as revealed by a linear stability analysis. The reason lies in the saturation of all feedback processes in the IP₃R gating dynamics, so that only fluctuations can restore the experimentally observed Ca²⁺ oscillations. In this spirit, we derive master equations that allow us to analytically quantify the onset of Ca²⁺ puffs and hence the stochastic time scale of intracellular Ca²⁺ dynamics. Moving up the spatial scale, we suggest formulating cellular dynamics in terms of waiting time distribution functions. This approach prevents the state-space explosion that is typical of descriptions of cellular dynamics based on channel states, yet still contains information on molecular fluctuations. We illustrate this method by studying global Ca²⁺ oscillations.
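To make the waiting-time viewpoint concrete, the sketch below runs a toy Gillespie-type simulation of a single channel cluster in which the opening propensity of closed channels grows with the number of already-open channels, and records the waiting time until a threshold number of channels is open as the puff onset. The channel number, rates and threshold are invented and are not taken from the models in the paper.

```python
# Toy waiting-time simulation for puff onset in a single IP3R cluster.
import numpy as np

rng = np.random.default_rng(2)
N_CHANNELS, THRESHOLD = 10, 3        # channels per cluster, open channels at onset
BASE_RATE, FEEDBACK = 0.05, 1.5      # opening rate (1/s) and Ca2+ feedback strength

def puff_onset_time():
    """Gillespie-style waiting time until THRESHOLD channels are open."""
    t, n_open = 0.0, 0
    while n_open < THRESHOLD:
        # Total opening propensity of the remaining closed channels; the
        # feedback factor mimics facilitation by Ca2+ from open neighbours.
        rate = (N_CHANNELS - n_open) * BASE_RATE * (1.0 + FEEDBACK * n_open)
        t += rng.exponential(1.0 / rate)
        n_open += 1
    return t

onsets = np.array([puff_onset_time() for _ in range(10_000)])
print("mean puff onset time: %.2f s" % onsets.mean())
```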

Relevance: 80.00%

Abstract:

Given a 2-manifold triangular mesh \(M \subset {\mathbb {R}}^3\) with border, a parameterization of \(M\) is a FACE or trimmed surface \(F=\{S,L_0,\ldots, L_m\}\): \(F\) is a connected subset or region of a parametric surface \(S\), bounded by a set of LOOPs \(L_0,\ldots ,L_m\) such that each \(L_i \subset S\) is a closed 1-manifold having no intersection with the other \(L_j\) LOOPs. The parametric surface \(S\) is a statistical fit of the mesh \(M\). \(L_0\) is the outermost LOOP bounding \(F\) and \(L_i\) is the LOOP of the i-th hole in \(F\) (if any). The problem of parameterizing triangular meshes is relevant for reverse engineering, tool path planning, feature detection, redesign, etc. State-of-the-art procedures parameterize a rectangular mesh \(M\). To improve on such procedures, we report here the implementation of an algorithm which parameterizes meshes \(M\) presenting holes and concavities. We synthesize a parametric surface \(S \subset {\mathbb {R}}^3\) which approximates a superset of the mesh \(M\). Then, we compute a set of LOOPs trimming \(S\), thereby completing the FACE \(F=\{S,L_0,\ldots ,L_m\}\). Our algorithm gives satisfactory results for \(M\) having low Gaussian curvature (i.e., \(M\) being quasi-developable or developable). This assumption is a reasonable one, since \(M\) is the product of a manifold segmentation preprocessing step. Our algorithm computes: (1) a manifold learning mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\), and (2) an inverse mapping \(S: W \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^3\), with \(W\) being a rectangular grid containing and surpassing \(U\). To compute \(\phi\) we test IsoMap, Laplacian Eigenmaps and Hessian locally linear embedding (best results with HLLE). For the back mapping (NURBS) \(S\), the crucial step is to find a control polyhedron \(P\), which is an extrapolation of \(M\). We calculate \(P\) by extrapolating radial basis functions that interpolate points inside \(\phi (M)\). We successfully test our implementation with several datasets presenting concavities, holes, and extreme non-developability. Ongoing work is devoted to manifold segmentation, which facilitates mesh parameterization.
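A rough sketch of this pipeline on a toy quasi-developable mesh with a hole: an Isomap embedding plays the role of the forward mapping φ (the paper reports best results with HLLE), and a radial-basis interpolant evaluated on a rectangular grid W stands in for the NURBS back-mapping S. The trimming LOOPs are not computed here, and the mesh, neighbour counts and grid size are arbitrary choices.

```python
# Toy sketch: manifold embedding plus RBF back-mapping on a rectangular grid W
# that contains and surpasses the embedded image U.
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
uv = rng.uniform(-1, 1, size=(1500, 2))
uv = uv[np.linalg.norm(uv, axis=1) > 0.3]          # punch a hole in the sheet
M = np.column_stack([uv[:, 0], uv[:, 1], 0.2 * np.sin(np.pi * uv[:, 0])])

# Forward mapping phi: M -> U in R^2 (Isomap as one of the tested embeddings).
U = Isomap(n_neighbors=10, n_components=2).fit_transform(M)

# Back mapping S: W -> R^3 as a radial-basis interpolant of the mesh vertices.
S = RBFInterpolator(U, M, smoothing=1e-6)

# Rectangular grid W containing and surpassing U (10% padding on each side).
pad = 0.1 * (U.max(axis=0) - U.min(axis=0))
lo, hi = U.min(axis=0) - pad, U.max(axis=0) + pad
gu, gv = np.meshgrid(np.linspace(lo[0], hi[0], 40), np.linspace(lo[1], hi[1], 40))
W = np.column_stack([gu.ravel(), gv.ravel()])
surface = S(W).reshape(40, 40, 3)
print("reconstructed surface grid:", surface.shape)
```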

Relevance: 80.00%

Abstract:

Master's dissertation, Electronics and Telecommunications Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2016.

Relevance: 80.00%

Abstract:

For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to comparing probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments, where probabilities of extremes are often more informative than central-tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates for investigating the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for the provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples of El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing the merits or limitations of the ENSO-based predictors.
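A minimal sketch of the core idea, using the Python lifelines package rather than the authors' implementation: the seasonal rainfall total (not time) is treated as the "event" variable, an ENSO index is the covariate, and the conditional CDF is read off the fitted survival function as 1 − S. The data, column names and effect size are synthetic.

```python
# Hedged sketch of a Cox-type regression used to estimate covariate-dependent CDFs.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 300
enso = rng.choice([-1, 0, 1], n)                               # La Nina / neutral / El Nino
rainfall = rng.gamma(shape=4, scale=120 * np.exp(-0.2 * enso)) # mm per wet season (toy)

df = pd.DataFrame({"rainfall": rainfall, "observed": 1, "enso": enso})
cph = CoxPHFitter().fit(df, duration_col="rainfall", event_col="observed")

# Conditional CDFs of seasonal rainfall for the three ENSO phases.
phases = pd.DataFrame({"enso": [-1, 0, 1]})
cdf = 1.0 - cph.predict_survival_function(phases)
print(cdf.iloc[::50])    # CDF values on a coarse rainfall grid
```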

Relevance: 80.00%

Abstract:

The change in the carbonaceous skeleton of nanoporous carbons during their activation has received limited attention, unlike its counterpart process in an inert atmosphere. Here we adopt a multi-method approach to elucidate this change in a poly(furfuryl alcohol)-derived carbon activated using cyclic application of oxygen saturation at 250 °C before its removal (along with carbon) at 800 °C in argon. The methods used include helium pycnometry, synchrotron-based X-ray diffraction (XRD) and associated radial distribution function (RDF) analysis, transmission electron microscopy (TEM) and, uniquely, electron energy-loss spectroscopy spectrum-imaging (EELS-SI), electron nanodiffraction and fluctuation electron microscopy (FEM). Helium pycnometry indicates the solid skeleton of the carbon densifies during activation from 78% to 93% of the density of graphite. RDF analysis, EELS-SI, and FEM all suggest this densification comes through an in-plane growth of sp2 carbon out to the medium range without a commensurate increase in order normal to the plane. This process could be termed ‘graphenization’. The exact way in which this process occurs is not clear, but TEM images of the carbon before and after activation suggest it may come through removal of the more reactive carbon, breaking constraining cross-links and creating space that allows the remaining carbon material to migrate in an annealing-like process.
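For orientation, the sketch below computes the pair correlation function g(r) by distance histogramming for a random model configuration in a periodic box, i.e. the real-space quantity that the RDF analysis probes; the paper obtains its RDFs from synchrotron total scattering, not from coordinates, and the box size and atom count here are arbitrary.

```python
# Pair correlation g(r) by distance histogramming for a toy periodic configuration.
import numpy as np

rng = np.random.default_rng(5)
L_BOX, N_ATOMS = 20.0, 500                 # Angstrom, arbitrary toy values
pos = rng.uniform(0, L_BOX, size=(N_ATOMS, 3))

# Minimum-image pair distances in the periodic box.
diff = pos[:, None, :] - pos[None, :, :]
diff -= L_BOX * np.round(diff / L_BOX)
dist = np.linalg.norm(diff, axis=-1)[np.triu_indices(N_ATOMS, k=1)]

bins = np.linspace(0.1, L_BOX / 2, 100)
hist, edges = np.histogram(dist, bins=bins)
r = 0.5 * (edges[1:] + edges[:-1])
shell_vol = 4 * np.pi * r**2 * np.diff(edges)
rho = N_ATOMS / L_BOX**3
g_r = hist / (shell_vol * rho * N_ATOMS / 2)   # normalise to the ideal-gas pair count
print("g(r) approaches 1 at large r:", g_r[-5:].round(2))
```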

Relevance: 80.00%

Abstract:

The objective of this study is to identify optimal designs of converging-diverging supersonic and hypersonic nozzles that achieve maximum uniformity of the thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle. This work shows how variation of such parameters influences the nozzle-exit flow non-uniformities. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field, including heat transfer, in forty nozzle shapes. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
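A hedged sketch of the surrogate-based step only: two RBF response surfaces are fitted to two objectives evaluated at sampled designs, the surrogates are evaluated densely, and non-dominated (Pareto) candidates are kept. The analytic objectives and design space below stand in for the CFD-computed exit non-uniformities and nozzle-shape parameters; this is not the ModeFrontier workflow itself.

```python
# RBF response surfaces plus a non-domination filter on toy two-objective data.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
designs = rng.uniform(0, 1, size=(40, 2))               # 40 sampled "nozzle shapes"
f1 = np.sum((designs - 0.3) ** 2, axis=1)               # stand-in objective 1
f2 = np.sum((designs - 0.7) ** 2, axis=1)               # stand-in objective 2

surrogate = RBFInterpolator(designs, np.column_stack([f1, f2]))

grid = rng.uniform(0, 1, size=(5000, 2))                # candidate designs
obj = surrogate(grid)

# Keep a candidate if no other candidate is <= in both objectives and < in one.
pareto = np.array([i for i, o in enumerate(obj)
                   if not np.any(np.all(obj <= o, axis=1) & np.any(obj < o, axis=1))])
print("Pareto-optimal candidates found:", len(pareto))
```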

Relevance: 80.00%

Abstract:

The velocity function (VF) is a fundamental observable statistic of the galaxy population that is similar in importance to the luminosity function, but much more difficult to measure. In this work we present the first directly measured circular VF that is representative over the range 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low-mass galaxy population (60 < v_circ < 170 km s^-1), we use the HI Parkes All Sky Survey VF. For the massive galaxy population (170 < v_circ < 320 km s^-1), we use stellar circular velocities from the Calar Alto Legacy Integral Field Area Survey (CALIFA). In earlier work we obtained measurements of the circular velocity at the 80% light radius for 226 galaxies and demonstrated that the CALIFA sample can produce volume-corrected galaxy distribution functions. The CALIFA VF includes homogeneous velocity measurements of both late- and early-type rotation-supported galaxies and has the crucial advantage of not missing the gas-poor massive ellipticals that HI surveys are blind to. We show that the two VFs can be combined in a seamless manner, as their ranges of validity overlap. The resulting observed VF is compared to VFs derived from cosmological simulations of the z = 0 galaxy population. We find that dark-matter-only simulations show a strong mismatch with the observed VF. Hydrodynamic simulations fare better, but still do not fully reproduce the observations.
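As a reminder of what "volume-corrected" means here, the sketch below builds a toy velocity function with the classical 1/Vmax estimator, in which each galaxy contributes the inverse of the maximum volume over which it could have been detected to its circular-velocity bin. The velocities and volumes are invented; the actual HIPASS and CALIFA corrections are considerably more involved.

```python
# Toy 1/Vmax velocity function: number density per circular-velocity bin.
import numpy as np

rng = np.random.default_rng(7)
n_gal = 400
v_circ = rng.uniform(60, 320, n_gal)                               # km/s
v_max = 1e5 * (v_circ / 100) ** 3 * rng.uniform(0.5, 1.5, n_gal)   # Mpc^3, toy values

bins = np.linspace(60, 320, 14)
vf = np.zeros(len(bins) - 1)
for k, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
    sel = (v_circ >= lo) & (v_circ < hi)
    vf[k] = np.sum(1.0 / v_max[sel]) / (hi - lo)       # galaxies / Mpc^3 / (km/s)

print("phi(v) per bin:", vf.round(9))
```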

Relevance: 80.00%

Abstract:

This thesis presents a study of globular clusters (GCs) based on the analysis of Monte Carlo simulations, with the aim of defining new empirical parameters that are measurable from observations and able to trace the different phases of their dynamical evolution. During their long-term dynamical evolution, due to mass segregation and dynamical friction, massive stars transfer kinetic energy to lower-mass objects, causing them to sink toward the cluster center. This continuous transfer of kinetic energy from the core to the outskirts triggers the runaway contraction of the core, known as "core collapse" (CC), followed by episodes of expansion and contraction called gravothermal oscillations. Clearly, such internal dynamical evolution corresponds to significant variations of the structure of the system as well. Determining the dynamical age of a cluster can be challenging, as it depends on various internal and external properties. The traditional classification of GCs as CC or post-CC systems relies on detecting a steep power-law cusp in the central density profile, which may not always be reliable due to post-CC oscillations or other processes. In this thesis, the normalized cumulative radial distribution (nCRD) within a fraction of the half-mass radius is analyzed, and three diagnostics (A5, P5, and S2.5) are defined. These diagnostics are sensitive to dynamical evolution and can distinguish pre-CC clusters from post-CC clusters. The analysis, performed using multiple simulations with different initial conditions (including varying binary fractions and the presence of dark remnants), showed that the time variations of the diagnostics follow distinct patterns depending on the binary fraction and on the retention or ejection of black holes. The analysis is extended to a larger set of simulations matching the observed properties of Galactic GCs, and the parameters show potential to distinguish the dynamical stages of the observed clusters as well.
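Since the abstract does not define A5, P5 and S2.5, the sketch below only illustrates the underlying ingredient: a normalised cumulative radial distribution of stars inside a small fraction of the half-mass radius, computed for a toy Plummer sphere, together with a hypothetical "area under the nCRD" number of the kind such diagnostics could be built from. None of this reproduces the thesis definitions.

```python
# Illustrative nCRD for a toy Plummer sphere plus a hypothetical area-type summary.
import numpy as np

rng = np.random.default_rng(8)
u = rng.uniform(1e-6, 0.999, 100_000)
r = 1.0 / np.sqrt(u ** (-2.0 / 3.0) - 1.0)      # Plummer radii, scale length a = 1

r_half = np.median(r)                           # half-mass radius (equal-mass stars)
grid = np.linspace(0.0, 0.05 * r_half, 200)     # innermost 5% of r_half
counts = np.searchsorted(np.sort(r), grid)      # stars inside each radius
ncrd = counts / counts[-1]                      # normalised cumulative profile

# Hypothetical diagnostic: normalised area under the nCRD over this range.
area = np.sum(0.5 * (ncrd[1:] + ncrd[:-1]) * np.diff(grid)) / (0.05 * r_half)
print("stars inside 0.05 r_half:", counts[-1], "| area under nCRD:", round(area, 3))
```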

Relevance: 80.00%

Abstract:

In high-energy hadron collisions, the production of heavy-flavour quarks (charm and bottom) at parton level is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predicting the heavy-flavour hadrons that are eventually produced requires knowledge of the parton distribution functions as well as an accurate description of the hadronisation process. The latter is taken into account via fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data have challenged this picture. In this dissertation, I study charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of a large data sample collected during LHC Run 2. The production of heavy flavour in this collision system is discussed, along with various hadronisation models implemented in commonly used event generators, which try to reproduce the experimental data while taking into account the unexpected LHC results on the enhanced production of charmed baryons. The role of multiple parton interactions (MPI), and how they affect the total charm production as a function of multiplicity, is also presented. The ALICE apparatus is described before moving to the experimental results, which concern the measurement of the relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$; these allow us to study the hadronisation mechanisms of charm quarks and to constrain different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity is shown, investigating the role of multi-parton interactions. This research is relevant both in its own right and for the mission of the ALICE experiment at the LHC, which is devoted to the study of the Quark-Gluon Plasma.