943 results for improvement of Lagrangian bounds


Relevance: 100.00%

Abstract:

The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to improving the diagnosis of AD and the assessment of its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two aspects of human behavior have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low cost and without any side effects. The experimental results obtained were very satisfactory and promising for early diagnosis and classification of AD patients.
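As a rough illustration of the classification step described above, the following Python sketch trains a small feed-forward ANN on pre-extracted speech features; the feature names, placeholder data, and the use of scikit-learn are assumptions for illustration only, not the authors' actual pipeline.

    # Hypothetical sketch: AD-vs-control classification from speech features.
    # The feature matrix (e.g., pause statistics, pitch measures, fractal
    # dimension of the speech signal) is assumed to be pre-extracted;
    # the random data below only stands in for real measurements.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))        # 60 speech segments x 5 features (placeholders)
    y = rng.integers(0, 2, size=60)     # 1 = AD, 0 = control (placeholder labels)

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    print(cross_val_score(model, X, y, cv=5).mean())   # mean cross-validated accuracy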

Relevance: 100.00%

Abstract:

This survey was carried out to provide the Kainji Lake Fisheries Promotion Project (KLFPP), whose overall goal is the improvement of the standard of living of fishing communities around Kainji Lake, Nigeria, and an increase in the availability of fish to consumers, with nutritional status baseline data for long-term monitoring and evaluation of the overall project goal. In a cross-sectional survey, baseline anthropometric data was collected from 768 children, aged 3-60 months, in 389 fisherfolk households around the southern sector of Kainji Lake, Nigeria. In addition, data was collected on the nutritional status and fertility of the mothers, vaccination coverage of children and child survival indicators. For control purposes, 576 children and 292 mothers from non-fishing households around Kainji Lake were likewise covered by the survey. A standardised questionnaire was used to collect relevant information, while anthropometric measurements were made using appropriate equipment. Data compilation and analysis was carried out with DATAEASE® and EPI-INFO® software, using NCHS reference data for the analysis of anthropometric measurements. The prevalence of stunted children in fishing households was high at 40%, while the prevalence of wasted and underweight children was likewise high at 10% and 29% respectively. Children from non-fishing households had a marginally lower prevalence of stunting, wasting and underweight, at 37%, 7% and 25% respectively, although these differences were not statistically significant. Given that the survey was carried out during a period of relative food abundance, the prevalence of wasting and underweight children is likely to be much higher during periods of food shortage. The prevalence of stunting, wasting and underweight was relatively high for children aged 3 to 23 months, suggesting an increased risk of malnutrition during this period, most likely associated with inadequate weaning practices. The prevalence of malnourishment amongst women of child-bearing age was relatively high, irrespective of the occupation of the household, with an average of 11% undernourished and 6% wasted. Vaccination coverage was very low, while infant and child mortality were extremely high, with about 1 in 5 children dying before their fifth birthday. Based on the ethical obligation to maximise the potential benefits of the survey, recommendations for activities to improve community nutrition and health were made for communication to relevant authorities. (PDF contains 52 pages)
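For readers unfamiliar with the anthropometric indicators, stunting, wasting and underweight are conventionally defined as height-for-age, weight-for-height and weight-for-age z-scores below -2 relative to the reference population; the short Python sketch below illustrates the prevalence calculation with invented z-scores, not the survey data.

    # Illustrative only: prevalence of stunting/wasting/underweight from z-scores,
    # using the conventional cut-off of -2 SD against reference data.
    import numpy as np

    haz = np.array([-2.5, -1.0, -2.1, 0.3, -3.0])   # height-for-age (stunting)
    whz = np.array([-1.2, -2.3, -0.5, -0.8, -2.1])  # weight-for-height (wasting)
    waz = np.array([-2.2, -1.9, -2.4, -0.1, -2.8])  # weight-for-age (underweight)

    def prevalence(z):
        return 100.0 * np.mean(z < -2)

    print(prevalence(haz), prevalence(whz), prevalence(waz))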

Relevance: 100.00%

Abstract:

This survey was carried out to provide the Kainji Lake Fisheries Promotion Project (KLFPP), whose overall goal is the improvement of the standard of living of fishing communities around Kainji Lake, Nigeria, by managing the fisheries on a sustainable basis, with follow-up data for long-term monitoring and evaluation of the overall project goal. A similar survey, conducted in 1996, provided the baseline against which data from the current survey was evaluated. In a cross-sectional survey, anthropometric data was collected from 576 children aged 3-60 months in 282 fisherfolk households around the southern sector of Kainji Lake, Nigeria. In addition, data was collected on the nutritional status and fertility of the mothers, vaccination coverage of children and child survival indicators. For control purposes, 374 children and 181 mothers from non-fishing households around Kainji Lake were likewise covered by the survey. A standardised questionnaire was used to collect relevant data, while anthropometric measurements were made using appropriate equipment. Data compilation and analysis was carried out with a specially designed Microsoft Access application, using NCHS reference data for the analysis of anthropometric measurements. Statistical significance testing was done using EPI-INFO® software. The results of the follow-up survey indicate a slight increase in the percentage of stunted pre-school children in fishing households around Kainji Lake, from 40% in 1996 to 41% in 1999. This increase is, however, not statistically significant (p = 0.704). Over the same period, the percentage of stunted children in non-fishing households increased from 37% to 39% (p = 0.540), which is also not statistically significant. Likewise, there were no statistically significant differences between the 1996 and 1999 results for the prevalence of either wasted or underweight children in fishing households. The same applies to children from non-fishing households. In addition, vaccination coverage remains very low, while infant and child mortality rates continue to be extremely high, with about 1 in 5 children dying before their fifth birthday. There has been no perceptible and lasting improvement in the standard of living of fishing households over the course of the second project phase, as indicated by the persistently high prevalence of stunting. The situation is the same for the control group, indicating that, for the region as a whole, a number of factors beyond the immediate influence of the project continue to have a negative impact on the standard of living. The results also show that the project activities have not had any negative long-term effect on the nutritional status of the beneficiaries. (PDF contains 44 pages)
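The quoted comparison of stunting prevalence (40% of 768 children in 1996 versus 41% of 576 in 1999, p = 0.704) is consistent with a standard two-proportion z-test; the sketch below back-calculates the counts from the reported percentages and sample sizes, so it only approximately reproduces the reported p-value.

    # Rough check of the 1996 vs 1999 stunting comparison (fishing households)
    # with a two-proportion z-test; counts are back-calculated from the
    # reported percentages, so the result is approximate.
    from math import sqrt, erf

    x1, n1 = 307, 768   # ~40% stunted in 1996
    x2, n2 = 236, 576   # ~41% stunted in 1999

    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x2 / n2 - x1 / n1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    print(round(z, 2), round(p_value, 3))   # roughly 0.37 and 0.71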

Relevance: 100.00%

Abstract:

For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.

To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.

I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
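The deterministic bounds that underlie this approach can be illustrated directly: given only a county's Latino voting-age population, its total voting-age population, and its total registrants, the Latino registration rate is confined to an interval. The Python sketch below uses invented county figures, not the California data analyzed in the study, and shows only these simple deterministic bounds rather than the full generalized-bounds (ecological inference) estimation.

    # Deterministic (method-of-bounds) limits on the Latino registration rate
    # for a single county; the county figures are invented placeholders.
    def latino_registration_bounds(latino_vap, total_vap, total_registered):
        other_vap = total_vap - latino_vap
        lower = max(0.0, (total_registered - other_vap) / latino_vap)
        upper = min(1.0, total_registered / latino_vap)
        return lower, upper

    counties = [
        ("County A", 40_000, 100_000, 75_000),   # high overall registration
        ("County B", 50_000, 80_000, 30_000),    # low overall registration
    ]
    for name, lvap, tvap, reg in counties:
        lo, hi = latino_registration_bounds(lvap, tvap, reg)
        print(f"{name}: Latino registration rate lies in [{lo:.2f}, {hi:.2f}]")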

I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.

With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.

Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.

Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.

While these findings suggest that there may not be any short-term or quick fixes for increasing Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were once "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.

Relevance: 100.00%

Abstract:

This thesis describes the expansion and improvement of the iterative in situ click chemistry OBOC peptide library screening technology. Previous work provided a proof-of-concept demonstration that this technique was advantageous for the production of protein-catalyzed capture (PCC) agents that could be used as drop-in replacements for antibodies in a variety of applications. Chapter 2 describes the technology development that was undertaken to optimize this screening process and make it readily available for a wide variety of targets. This optimization is what has allowed for the explosive growth of the PCC agent project over the past few years.

These technology improvements were applied to the discovery of PCC agents specific for single amino acid point mutations in proteins, agents that have many applications in cancer detection and treatment. Chapter 3 describes the use of a general, all-chemical epitope-targeting strategy that can focus PCC agent development directly on a site of interest on a protein surface. This technique uses a chemically synthesized fragment of the protein, called an epitope, substituted with a click handle, in combination with the OBOC in situ click chemistry libraries, to direct ligand development to that site. Specifically, Chapter 3 discusses the use of this technique in developing a PCC agent specific for the E17K mutation of Akt1. Chapter 4 details the expansion of this ligand into a mutation-specific inhibitor, with applications in therapeutics.

Relevance: 100.00%

Abstract:

This thesis describes a series of experimental studies of lead chalcogenide thermoelectric semiconductors, mainly PbSe. Focusing on a well-studied semiconductor and reporting good but not extraordinary zT, this thesis distinguishes itself by answering the following questions that had not previously been answered: What represents the thermoelectric performance of PbSe? Where does the high zT come from? How (and by how much) can we make it better? For the first question, samples were made with the highest quality. Each transport property was carefully measured, cross-verified and compared with both historical and contemporary reports to overturn a commonly believed underestimation of zT. For n- and p-type PbSe, zT at 850 K can be 1.1 and 1.0, respectively. For the second question, a systematic approach based on the quality factor B was used. In n-type PbSe, zT benefits from a high-quality conduction band that combines good degeneracy, low band mass and low deformation potential, whereas the zT of p-type PbSe is boosted when two mediocre valence bands converge in band-edge energy. In both cases the lattice thermal conductivity of PbSe is inherently low. For the third question, the use of solid-solution lead chalcogenide alloys was first evaluated. Simple criteria were proposed to help quickly evaluate the potential of improving zT by introducing atomic disorder. For both PbTe1-xSex and PbSe1-xSx, the impacts on electron and phonon transport compensate each other, so zT in each case was roughly the average of the two binary compounds. In p-type Pb1-xSrxSe alloys an improvement of zT from 1.1 to 1.5 at 900 K was achieved, due to the band engineering effect that moves the two valence bands closer in energy. To date, improving n-type PbSe has not been accomplished, but a possible strategy is discussed.
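For orientation, the dimensionless figure of merit quoted throughout is zT = S^2·σ·T/κ; the sketch below evaluates it for illustrative transport values of roughly the right magnitude for a lead chalcogenide, not for the measurements reported in the thesis.

    # zT = S^2 * sigma * T / kappa; the values below are illustrative placeholders
    # of roughly PbSe-like magnitude, not measurements from this work.
    def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m, kappa_W_per_mK, T_K):
        return seebeck_V_per_K**2 * conductivity_S_per_m * T_K / kappa_W_per_mK

    S = 250e-6      # Seebeck coefficient, V/K
    sigma = 4.0e4   # electrical conductivity, S/m
    kappa = 1.5     # total thermal conductivity, W/(m K)
    print(figure_of_merit(S, sigma, kappa, 850))   # ~1.4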

Relevance: 100.00%

Abstract:

The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc>>1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the transported scalar is enforced by applying derivative-limiting techniques, while physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining the vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc>>1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide an opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
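The derivative-limiting idea can be illustrated in one dimension: clipping the endpoint derivatives of a cubic Hermite interpolant keeps the reconstruction within the cell data. The sketch below is a standard Fritsch-Carlson-style monotone limiter, shown only for intuition; it is not the thesis's scheme, which deliberately relaxes the limiter to admit physically plausible single sub-cell extrema.

    # Simplified 1-D illustration: cubic Hermite interpolation on [x0, x1] with
    # endpoint derivatives clipped so the interpolant stays monotone/bounded.
    # This is a Fritsch-Carlson-style limiter, NOT the thesis's algorithm.
    def limited_hermite(x, x0, x1, f0, f1, d0, d1):
        h = x1 - x0
        delta = (f1 - f0) / h                     # secant slope over the cell
        if delta == 0.0:
            d0 = d1 = 0.0                         # flat cell: suppress the derivatives
        else:
            # clip derivatives that disagree in sign with the secant or are too steep
            d0 = 0.0 if d0 * delta < 0 else min(d0, 3 * delta, key=abs)
            d1 = 0.0 if d1 * delta < 0 else min(d1, 3 * delta, key=abs)
        t = (x - x0) / h                          # local coordinate in [0, 1]
        h00 = 2*t**3 - 3*t**2 + 1                 # Hermite basis functions
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*f0 + h10*h*d0 + h01*f1 + h11*h*d1

    print(limited_hermite(0.5, 0.0, 1.0, 0.0, 1.0, 4.0, 4.0))  # stays within [0, 1]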

Relevance: 100.00%

Abstract:

The proposed EC Water Framework Directive (WFD) incorporates some new concepts in the field of water protection. Most of these concepts rely on the use of applied ecology of water systems. The expected improvement of environmental management is very new in this context. The new WFD will allow the checking of the eco-epidemiological results of several human impacts on aquatic ecosystems, such as toxic pollution and habitat modification. This paper intends to show some consequences of the WFD in the field of ecotoxicology.

Relevance: 100.00%

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Relevance: 100.00%

Abstract:

In this essay, three lines of evidence are developed that sturgeons in the Chesapeake Bay and elsewhere are unusually sensitive to hypoxic conditions: 1. In comparison to other fishes, sturgeons have a limited behavioral and physiological capacity to respond to hypoxia. Basal metabolism, growth, feeding rate, and survival are sensitive to changes in oxygen level, which may indicate a relatively poor ability of sturgeons to oxyregulate. 2. During summertime, temperatures >20°C amplify the effect of hypoxia on sturgeons and other fishes due to a temperature-oxygen "squeeze" (Coutant 1987). In bottom waters, this interaction results in substantial reduction of habitat; in dry years, sturgeon nursery habitats in the Chesapeake Bay may be particularly reduced or even eliminated. 3. While evidence for population-level effects due to hypoxia is circumstantial, the absence of Atlantic sturgeon reproduction corresponds with estuaries, like the Chesapeake Bay, where summertime hypoxia predominates on a system-wide scale. Also, the recent and dramatic recovery of shortnose sturgeon in the Hudson River (a 4-fold increase in abundance from 1980 to 1995) may have been stimulated by the improvement of a large portion of the nursery habitat, which was restored from hypoxia to normoxia during the period 1973-1978.

Relevance: 100.00%

Abstract:

Hydrogen is the only atom for which the Schrödinger equation is solvable. Consisting only of a proton and an electron, hydrogen is the lightest element and, nevertheless, is far from being simple. Under ambient conditions it forms diatomic H2 molecules in the gas phase, but different temperatures and pressures lead to a complex phase diagram, which is not completely known yet. Solid hydrogen was first documented in 1899 [1] and was found to be insulating. At higher pressures, however, hydrogen can be metallized. In 1935 Wigner and Huntington predicted that the metallization pressure would be 25 GPa [2], where molecules would dissociate to form a monatomic metal, like the alkali metals that lie below hydrogen in the periodic table. The prediction of the metallization pressure turned out to be wrong: metallic hydrogen has not been found yet, even under a pressure as high as 320 GPa. Nevertheless, extrapolations based on optical measurements suggest that a metallic phase may be attained at 450 GPa [3]. The interest of materials scientists in metallic hydrogen can be attributed, at least to a great extent, to Ashcroft, who in 1968 suggested that such a system could be a high-temperature superconductor [4]. The temperature at which this material would exhibit a transition from a superconducting to a non-superconducting state (Tc) was estimated to be around room temperature. The implications of such a statement are very interesting in the field of astrophysics: in planets that contain a large quantity of hydrogen and whose temperature is below Tc, superconducting hydrogen may be found, especially at the center, where the gravitational pressure is high. This might be the case for Jupiter, whose proportion of hydrogen is about 90%. There are also speculations suggesting that the high magnetic field of Jupiter is due to persistent currents related to the superconducting phase [5]. Metallization and superconductivity of hydrogen have puzzled scientists for decades, and the community is trying to answer several questions. For instance, what is the structure of hydrogen at very high pressures? Or, a more general one: what is the maximum Tc a phonon-mediated superconductor can have [6]? A great experimental effort has been carried out pursuing metallic hydrogen and trying to answer the questions above; however, the characterization of solid phases of hydrogen is a hard task. Achieving the high pressures needed to reach the sought-after phases requires advanced technologies. Diamond anvil cells (DACs) are commonly used devices. These devices consist of two diamonds with tips of small area; for this reason, when a force is applied, the pressure exerted is very large. This pressure is uniaxial, but it can be turned into hydrostatic pressure using transmitting media. Nowadays, this method makes it possible to reach pressures higher than 300 GPa, but even at this pressure hydrogen does not show metallic properties. A recently developed technique that improves on the DAC can reach pressures as high as 600 GPa [7], so it is a promising step forward in high-pressure physics. Another drawback is that the electronic density of the structures is so low that X-ray diffraction patterns have low resolution. For these reasons, ab initio studies are an important source of knowledge in this field, within their limitations.
When treating hydrogen, there are many subtleties in the calculations: as the atoms are so light, the ions forming the crystalline lattice have significant displacements even when temperatures are very low, and even at T = 0 K, due to Heisenberg's uncertainty principle. Thus, the energy corresponding to this zero-point (ZP) motion is significant and has to be included in an accurate determination of the most stable phase. This has been done by including ZP vibrational energies within the harmonic approximation for a range of pressures at T = 0 K, giving rise to a series of structures that are stable in their respective pressure ranges [8]. Very recently, a treatment of the phases of hydrogen that includes anharmonicity in the ZP energies has suggested that the relative stability of the phases may change with respect to the calculations within the harmonic approximation [9]. Many of the proposed structures for solid hydrogen have been investigated. In particular, the Cmca-4 structure, which was found to be the stable one from 385 to 490 GPa [8], is metallic. Calculations for this structure, within the harmonic approximation for the ionic motion, predict a Tc of up to 242 K at 450 GPa [10]. Nonetheless, due to the large ionic displacements, the harmonic approximation may not suffice to describe the system correctly. The aim of this work is to apply a recently developed method to treat anharmonicity, the stochastic self-consistent harmonic approximation (SSCHA) [11], to Cmca-4 metallic hydrogen. This way, we will be able to study the effects of anharmonicity on the phonon spectrum and to try to understand the changes it may provoke in the value of Tc. The work is structured as follows. First we present the theoretical basis of the calculations: Density Functional Theory (DFT) for the electronic calculations, phonons in the harmonic approximation, and the SSCHA. Then we apply these methods to Cmca-4 hydrogen and discuss the results obtained. In the last chapter we draw some conclusions and propose possible future work.
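Once an electron-phonon coupling constant λ and a logarithmic-average phonon frequency ω_log are available from such phonon calculations, Tc is commonly estimated with the McMillan-Allen-Dynes formula; the sketch below uses placeholder inputs, not the values computed for Cmca-4 hydrogen in this work.

    # McMillan-Allen-Dynes estimate of Tc for a phonon-mediated superconductor.
    # lam, omega_log and mu_star are placeholders, not the coupling constants
    # calculated for Cmca-4 hydrogen in this work.
    from math import exp

    def mcmillan_allen_dynes_Tc(omega_log_K, lam, mu_star=0.1):
        """omega_log_K: logarithmic-average phonon frequency expressed in kelvin."""
        return (omega_log_K / 1.2) * exp(-1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam)))

    print(mcmillan_allen_dynes_Tc(omega_log_K=1500.0, lam=1.8, mu_star=0.1))  # ~200 K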

Relevance: 100.00%

Abstract:

The dependences of the recording properties of LiNbO3:Fe:Mn crystals on an external electric field (applied in the recording or fixing phase of the nonvolatile holographic recording process) are numerically investigated, and the optimal conditions for applying an external electric field in this two-step nonvolatile holographic recording process are discussed in detail. A significant improvement of the photorefractive performance has been found, and experimental verifications using a small external electric field are described. Moreover, direct evidence relating to the dominant photovoltaic mechanism in the doubly doped LiNbO3 crystals and to the unconventional grating-enhanced fixing is revealed by applying an external electric field in the recording and the fixing phases, respectively.

Relevance: 100.00%

Abstract:

Novel multifunctional inorganic-organic photorefractive (PR) poly(N-vinyl)-3-[p-nitrophenylazo]carbazolyl-CdS nanocomposites with different molar ratios of CdS to poly(N-vinyl)-3-[p-nitrophenylazo]carbazolyl (PVNPAK) were synthesized via a post-azo-coupling reaction and a chemical hybridization approach, respectively. The nanocomposites are highly soluble and could be obtained as film-forming materials with appreciably high molecular weights and a low glass transition temperature (Tg) due to the flexible spacers. The PVNPAK matrix possesses a highest occupied molecular orbital level of about -5.36 eV, determined from cyclic voltammetry. Second harmonic generation (SHG) could be observed in PVNPAK film without any poling procedure, and an effective second-order nonlinear optical susceptibility of 4.7 pm/V was obtained. Transmission electron microscopy showed that the CdS particles, which act as photosensitizers, are of nanoscale size in PVNPAK. The improved interface quality between CdS and the polymer matrix is responsible for the efficient photoinduced charge generation in the nanocomposites. An asymmetric optical energy exchange between two beams in the polymer composites PVNPAK-CdS/ECZ was found even without an external field in two-beam coupling (TBC) experiments, and TBC gains and diffraction efficiencies of 14.26 cm^-1 and 3.4% for PVNPAK-5-CdS/ECZ and of 16.43 cm^-1 and 4.4% for PVNPAK-15-CdS/ECZ were measured at a wavelength of 647.1 nm, respectively.
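For context, TBC gain coefficients such as those quoted above are conventionally extracted from the measured beam intensities as Γ = (1/L)·ln[γ0·β/(1 + β − γ0)], where γ0 is the signal transmission with and without the pump, β the pump-to-signal intensity ratio, and L the interaction length; the sketch below uses invented numbers, not the measurement conditions of this study.

    # Conventional estimate of the two-beam-coupling gain coefficient:
    # Gamma = (1/L) * ln(gamma0 * beta / (1 + beta - gamma0)).
    # gamma0, beta and the interaction length below are illustrative placeholders,
    # not the conditions behind the 14.26 and 16.43 cm^-1 values quoted above.
    from math import log

    def tbc_gain_coefficient(gamma0, beta, interaction_length_cm):
        return log(gamma0 * beta / (1 + beta - gamma0)) / interaction_length_cm

    print(tbc_gain_coefficient(gamma0=1.07, beta=1.0, interaction_length_cm=0.01))  # ~14 cm^-1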