959 results for DIRECT LATERAL APPROACH
Abstract:
The Court of Justice’s decision of 16 July 2015 in Case C-83/14, CHEZ Razpredelenie Bulgaria AD v Komisia za zashtita ot diskriminatsia, is a critically important case for two main reasons. First, it represents a further step along the path of addressing ethnic discrimination against Roma communities in Europe, particularly in Bulgaria, where the case arose. Second, it provides interpretations (sometimes controversial ones) of core concepts in the EU antidiscrimination Directives that will be drawn on in the application of equality law well beyond Bulgaria, and well beyond the pressing problem of ethnic discrimination against Roma. This article focuses particularly on the second issue, the potentially broader implications of the case. In particular, it asks whether the Court of Justice’s approach in CHEZ is subtly redrawing the boundaries of EU equality law in general, in particular by expanding the concept of direct discrimination, or whether the result and the approach adopted are sui generis, depending on the particular context of the case and the fact that it involves allegations of discrimination against Roma, and therefore of limited general application.
Abstract:
In this paper, the level of dynamics, as described by the Assessment Dynamic Ratio (ADR), is measured directly through a field test on a bridge in the United Kingdom. The bridge was instrumented using fiber-optic strain sensors, and piezo-polymer weigh-in-motion sensors were installed in the pavement on the approach road. Field measurements of static and static-plus-dynamic strains were taken over 45 days. The results show that, while dynamic amplification is large for many loading events, these tend not to be the critical events. ADR, the allowance that should be made for dynamics in an assessment of safety, is small.
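As a rough illustration of the distinction drawn here between per-event dynamic amplification and the allowance needed at the assessment level, the following Python sketch computes per-event dynamic amplification factors and an ADR-style ratio of characteristic total to characteristic static strain. The randomly generated placeholder strains and the simple percentile used in place of a proper extreme-value extrapolation are illustrative assumptions, not the instrumentation or analysis of the study.

```python
import numpy as np

# Hypothetical per-event measurements (microstrain), standing in for 45 days of monitoring:
# static_strain[i] - strain attributable to the static load of the event
# total_strain[i]  - measured static-plus-dynamic strain for the same event
rng = np.random.default_rng(0)
static_strain = rng.gamma(shape=20.0, scale=5.0, size=10_000)          # placeholder data
total_strain = static_strain * (1.0 + np.abs(rng.normal(0.05, 0.05, size=10_000)))

# Per-event dynamic amplification factor (DAF)
daf = total_strain / static_strain

# Characteristic values taken here as a high percentile of each population
# (a real assessment would extrapolate to a return period, e.g. via extreme-value fitting)
char_static = np.percentile(static_strain, 99.9)
char_total = np.percentile(total_strain, 99.9)

# Assessment Dynamic Ratio: allowance for dynamics at the characteristic level
adr = char_total / char_static
print(f"mean DAF = {daf.mean():.3f}, max DAF = {daf.max():.3f}, ADR = {adr:.3f}")
```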
Abstract:
Typologies have represented an important tool for the development of comparative social policy research and continue to be widely used in spite of growing criticism of their ability to capture the complexity of welfare states and their internal heterogeneity. In particular, debates have focused on the presence of hybrid cases and the existence of distinct cross-national patterns of variation across areas of social policy. There is growing awareness of these issues, but empirical research often still relies on methodologies aimed at classifying countries into a limited number of unambiguous types. This article proposes a two-step approach based on fuzzy-set ideal-type analysis for the systematic analysis of hybrids at the level of both policies (step 1) and policy configurations or combinations of policies (step 2). This approach is demonstrated using the case of childcare policies in European economies. In the first step, parental leave policies are analysed using three methods – direct, indirect, and combinatory – to identify and describe specific hybrid forms at the level of policy analysis. In the second step, the analysis focuses on the relationship between parental leave and childcare services in order to develop an overall typology of childcare policies, which clearly shows that many countries display characteristics normally associated with different types (hybrids). This two-step approach therefore enhances our ability to account for and make sense of hybrid welfare forms produced by tensions and contradictions within and between policies.
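A minimal sketch of the general fuzzy-set ideal-type logic invoked here (direct-method calibration followed by fuzzy intersection of two policy dimensions) is given below. The indicators, calibration anchors and the 0.75 hybridity cut-off are illustrative assumptions, not the calibrations used in the article.

```python
import numpy as np

def direct_calibration(x, full_out, crossover, full_in):
    """Direct method: map a raw indicator to fuzzy membership in [0, 1]
    using three anchors (logistic calibration as in fsQCA)."""
    upper = 3.0 / (full_in - crossover)      # crossover -> log-odds 0, full membership -> ~ +3
    lower = 3.0 / (crossover - full_out)     # full non-membership -> ~ -3
    log_odds = np.where(x >= crossover, (x - crossover) * upper, (x - crossover) * lower)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Illustrative raw indicators for a handful of hypothetical countries
leave_weeks = np.array([12, 52, 160, 26])                 # generosity of parental leave
childcare_coverage = np.array([0.60, 0.30, 0.10, 0.45])   # coverage of formal childcare

gen = direct_calibration(leave_weeks, full_out=4, crossover=26, full_in=104)      # "generous leave"
cov = direct_calibration(childcare_coverage, full_out=0.05, crossover=0.33, full_in=0.60)

# Step 2: membership in four ideal-typical configurations (fuzzy AND = min, NOT = 1 - x)
types = {
    "comprehensive":   np.minimum(gen, cov),
    "leave-centred":   np.minimum(gen, 1 - cov),
    "service-centred": np.minimum(1 - gen, cov),
    "residual":        np.minimum(1 - gen, 1 - cov),
}

for i in range(len(leave_weeks)):
    best = max(types, key=lambda t: types[t][i])
    hybrid = types[best][i] < 0.75            # no strong membership in any single type
    print(f"country {i}: best fit = {best} ({types[best][i]:.2f}), hybrid = {hybrid}")
```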
Abstract:
This work represents an original contribution to the methodology for ecosystem model development, as well as the first attempt at an end-to-end (E2E) model of the Northern Humboldt Current Ecosystem (NHCE). The main purpose of the developed model is to provide a tool for ecosystem-based management and decision making, which is why the credibility of the model is essential; this credibility can be assessed by confronting the model with data. Additionally, the NHCE exhibits high climatic and oceanographic variability at several scales, the major source of interannual variability being the interruption of the upwelling seasonality by the El Niño Southern Oscillation, which has direct effects on larval survival and fish recruitment success. Fishing activity can also be highly variable, depending on the abundance and accessibility of the main fishery resources. This context raises the two main methodological questions addressed in this thesis, through the development of an end-to-end model coupling the high trophic level model OSMOSE to the hydrodynamic and biogeochemical model ROMS-PISCES: i) how to calibrate ecosystem models using time series data and ii) how to incorporate the impact of the interannual variability of the environment and fishing. First, this thesis highlights some issues related to the confrontation of complex ecosystem models with data and proposes a methodology for a sequential, multi-phase calibration of ecosystem models. We propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. These criteria, along with the availability of approximate initial estimates, are then used as decision rules to determine which parameters need to be estimated and their precedence order in the sequential calibration process. Additionally, a new evolutionary algorithm designed for the calibration of stochastic models (e.g. individual-based models) and optimized for maximum likelihood estimation has been developed and applied to the calibration of the OSMOSE model to time series data. The environmental variability is explicit in the model: the ROMS-PISCES model forces the OSMOSE model and drives potential bottom-up effects up the food web through plankton and fish trophic interactions, as well as through changes in the spatial distribution of fish. The latter effect was taken into account using presence/absence species distribution models, which are traditionally assessed through a confusion matrix and its associated statistical metrics. However, when considering the prediction of the habitat over time, the variability in the spatial distribution of the habitat can be summarized and validated using the emerging patterns from the shape of the spatial distributions. We modeled the potential habitat of the main species of the Humboldt Current Ecosystem using several sources of information (fisheries, scientific surveys and satellite monitoring of vessels) jointly with environmental data from remote sensing and in situ observations, from 1992 to 2008. The potential habitat was predicted over the study period at monthly resolution, and the model was validated against quantitative and qualitative information on the system using a pattern-oriented approach. The final ROMS-PISCES-OSMOSE E2E ecosystem model for the NHCE was calibrated using our evolutionary algorithm and a likelihood approach to fit monthly time series data of landings, abundance indices and catch-at-length distributions from 1992 to 2008.
To conclude, some potential applications of the model for fishery management are presented and their limitations and perspectives discussed.
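A minimal sketch of the general idea of calibrating a stochastic simulator against time series data with an evolutionary algorithm and a likelihood criterion is given below. The logistic-growth placeholder simulator, Gaussian likelihood and (mu+lambda)-style evolution strategy are illustrative assumptions, not the OSMOSE model or the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_model(params, n_steps=60, n_reps=20):
    """Placeholder stochastic simulator (stands in for an individual-based model):
    a noisy logistic-growth biomass series driven by (r, K)."""
    r, K = params
    out = np.empty((n_reps, n_steps))
    for rep in range(n_reps):
        x = 0.1 * K
        for t in range(n_steps):
            x = max(x + r * x * (1 - x / K) + rng.normal(0, 0.02 * K), 1e-6)
            out[rep, t] = x
    return out

def neg_log_likelihood(params, observed):
    """Gaussian likelihood of the observed series around the mean simulated trajectory."""
    sims = stochastic_model(params, n_steps=len(observed))
    mu, sd = sims.mean(axis=0), sims.std(axis=0) + 1e-6
    return 0.5 * np.sum(((observed - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))

def evolutionary_calibration(observed, pop_size=30, n_gen=40, bounds=((0.05, 1.0), (50, 500))):
    """(mu + lambda)-style evolution strategy: keep the best half, mutate to refill."""
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(n_gen):
        fitness = np.array([neg_log_likelihood(ind, observed) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]
        children = parents + rng.normal(0, 0.05 * (hi - lo), size=parents.shape)
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    fitness = np.array([neg_log_likelihood(ind, observed) for ind in pop])
    return pop[np.argmin(fitness)]

# Synthetic "observed" series generated with known parameters, then recovered by the EA
observed = stochastic_model((0.3, 200.0)).mean(axis=0)
print("estimated (r, K):", evolutionary_calibration(observed))
```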
Abstract:
Macrolactones are important structural scaffolds in many areas of the chemical industry, particularly in the pharmaceutical and cosmetic markets. However, the traditional strategy for preparing macrolactones remains inconvenient, notably requiring the (super)stoichiometric addition of activating agents. Consequently, stoichiometric quantities of by-products are generated; these are often toxic, harmful to the environment, and require tedious purification methods for their removal. This thesis describes the development of an efficient hafnium-catalyzed macrolactonization performed directly from precursors bearing a carboxylic acid and a primary alcohol, generating only water as a by-product and requiring neither slow-addition nor azeotropic techniques. The protocol was also adapted to the direct synthesis of macrodiolides from equimolar mixtures of diols and dicarboxylic acids and to the synthesis of head-to-tail dimers of seco acids. Macrocyclic musks as well as macrolactones relevant to medicinal chemistry were synthesized with the approach developed. A protocol for the direct hafnium-catalyzed esterification between carboxylic acids and primary alcohols was also developed. Various methods for the direct catalytic macrolactonization between secondary alcohols and carboxylic acids were investigated. In addition, the phase-separation strategy for macrocyclization in continuous flow was applied to the formal total synthesis of the macrolactone ivorenolide A. The key steps of the synthesis include a macrocyclization by Glaser-Hay alkyne coupling and a Z-selective alkene metathesis reaction.
Abstract:
Background. Indirect revascularization is a therapeutic approach in cases of severe angina not suitable for percutaneous or surgical revascularization. Transmyocardial revascularization (TMR) is one of the techniques used for indirect revascularization; it creates transmyocardial channels by delivering a laser energy beam to the left ventricular epicardial surface. The benefits of the procedure are related mainly to the angiogenesis caused by inflammation and secondarily to the destruction of the nervous fibers of the heart. Patients and method. From September 1996 to July 1997, 14 patients (9 males, 66.7%; mean age 64.8±7.9 years) underwent TMR. All patients reported angina at rest; the Canadian Angina Class was IV in 7 patients (58.3%) and III in 5 (41.7%). Before enrollment, coronarography was routinely performed to assess the feasibility of Coronary Artery Bypass Grafting (CABG): 13 patients (91.6%) had coronary artery lesions not suitable for direct revascularization; in the one patient submitted to a combined TMR + CABG procedure this condition was limited to the postero-lateral area. Results. Mean discharge time was 3.2±1.3 days after surgery. All patients were discharged in good clinical condition. Perfusion thallium scintigraphy was performed in 7 patients at a mean follow-up of 4±2 months, showing an improvement of perfusion defects in all but one. Moreover, an improvement on the exercise treadmill test was observed in the same patients, and all of them are in good clinical condition, with a significantly reduced use of active drugs. Conclusion. Our experience confirms that TMR is a safe and feasible procedure and offers a therapeutic solution in cases of untreatable angina. Moreover, it could serve as a hybrid approach for patients undergoing CABG when no vessels suitable for surgical grafting are present in limited areas of the heart.
Abstract:
Occupational exposure assessment can be challenging due to several factors, the most important being the associated costs and the dependence of the results on the conditions at the time of sampling. Conducting a task-based exposure assessment allows better control measures to be defined to eliminate or reduce exposure, since the tasks with the highest exposure are more easily identified. A research study was developed to show the importance of task-based exposure assessment in four different settings (bakery, horsemanship, waste sorting and cork industry). Measurements were performed using portable direct-reading hand-held equipment and were conducted near the workers' noses during task performance. For each task, measurements of approximately 5 minutes were taken. It was possible to detect the task in each setting that was responsible for the highest particle exposure, allowing priorities to be defined regarding investments in preventive and protective measures.
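A minimal, hypothetical sketch of how 5-minute direct-reading measurements might be aggregated per task to rank tasks by exposure is given below; the settings, task names and concentrations are invented for illustration and do not reproduce the study's data.

```python
import statistics
from collections import defaultdict

# Hypothetical direct-reading particle concentrations (mg/m3), one value per
# ~5-minute task-based measurement; the settings and tasks are illustrative only.
readings = [
    ("bakery", "flour dosing", 3.1), ("bakery", "flour dosing", 2.7),
    ("bakery", "oven unloading", 0.6), ("bakery", "packing", 0.4),
    ("cork industry", "sanding", 5.2), ("cork industry", "sanding", 4.8),
    ("cork industry", "cutting", 1.9),
]

by_task = defaultdict(list)
for setting, task, conc in readings:
    by_task[(setting, task)].append(conc)

# Rank tasks by mean concentration to prioritize preventive and protective measures
means = {key: statistics.mean(vals) for key, vals in by_task.items()}
for (setting, task), mean_conc in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{setting:15s} {task:15s} mean = {mean_conc:.1f} mg/m3")
```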
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail with the help of experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has been motivated generally by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining a proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes as well as the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed in \cite{Juric1998} for the computation of a range of phase change problems including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge with methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set and volume of fluid method and the diffuse interface approach were used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those for the interface tracking method. Such spurious fluctuations in pressure are particularly undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain suggested by Nguyen et al. in \cite{nguyen2001boundary} suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet but is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose (iii) a general mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study on Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase, phase change method.
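The one-dimensional Stefan problem cited above as a verification case can be sketched compactly. The following Python snippet grows a planar vapor layer via the Stefan condition and compares the final thickness against the classical similarity solution; the nondimensional properties, the Landau coordinate transformation and the explicit finite differences are my own simplifying choices, not the sharp-interface scheme developed in the thesis.

```python
import math
import numpy as np

# One-phase Stefan problem (planar film growth): a vapor layer 0 <= x <= delta(t)
# between a superheated wall (T_wall) and liquid held at saturation (T_sat).
# The Landau transform xi = x/delta(t) maps the moving domain onto [0, 1].
alpha, k, rho, h_fg, cp = 1.0, 1.0, 1.0, 1.0, 1.0    # nondimensional properties (assumed)
T_wall, T_sat = 1.0, 0.0
Ste = cp * (T_wall - T_sat) / h_fg                   # Stefan number

n = 21
xi = np.linspace(0.0, 1.0, n)
dxi = xi[1] - xi[0]
delta0 = 0.2                                         # small initial layer thickness
delta = delta0
T = T_wall + (T_sat - T_wall) * xi                   # initial linear temperature profile
dt = 0.4 * (dxi * delta0) ** 2 / alpha               # explicit stability limit at t = 0

t, t_end = 0.0, 1.0
while t < t_end:
    # one-sided temperature gradient at the interface (xi = 1) and Stefan condition
    dTdxi_int = (T[-1] - T[-2]) / dxi
    ddelta_dt = -(k / (rho * h_fg * delta)) * dTdxi_int

    # transformed heat equation: T_t = (alpha/delta^2) T_xixi + (xi/delta) delta' T_xi
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + dt * (
        alpha / delta**2 * (T[2:] - 2 * T[1:-1] + T[:-2]) / dxi**2
        + xi[1:-1] / delta * ddelta_dt * (T[2:] - T[:-2]) / (2 * dxi)
    )
    T = T_new                                        # boundary values stay at T_wall, T_sat
    delta += dt * ddelta_dt
    t += dt

# Neumann similarity solution: delta = 2*lam*sqrt(alpha*t), where
# lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi); solved here by bisection.
f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - Ste / math.sqrt(math.pi)
lo, hi = 1e-6, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
lam = 0.5 * (lo + hi)
t_offset = (delta0 / (2 * lam)) ** 2 / alpha          # shift so the curves match at t = 0
delta_exact = 2 * lam * math.sqrt(alpha * (t_end + t_offset))
print(f"numerical delta = {delta:.4f}, similarity solution = {delta_exact:.4f}")
```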
Abstract:
In support of the achievement goal theory (AGT), empirical research has demonstrated psychosocial benefits of the mastery-oriented learning climate. In this study, we examined the effects of perceived coaching behaviors on various indicators of psychosocial well-being (competitive anxiety, self-esteem, perceived competence, enjoyment, and future intentions for participation), as mediated by perceptions of the coach-initiated motivational climate, achievement goal orientations and perceptions of sport-specific skills efficacy. Using a pre-post test design, 1,464 boys, ages 10-15 (M = 12.84 years, SD = 1.44), who participated in a series of 12 football skills clinics were surveyed from various locations across the United States. Using structural equation modeling (SEM) path analysis and hierarchical regression analysis, the cumulative direct and indirect effects of the perceived coaching behaviors on the psychosocial variables at post-test were parsed out to determine what types of coaching behaviors are more conducive to the positive psychosocial development of youth athletes. The study demonstrated that how coaching behaviors are perceived impacts the athletes’ perceptions of the motivational climate and achievement goal orientations, as well as self-efficacy beliefs. These effects in turn affect the athletes’ self-esteem, general competence, sport-specific competence, competitive anxiety, enjoyment, and intentions to remain involved in the sport. The findings also clarify how young boys internalize and interpret coaches’ messages through modification of achievement goal orientations and sport-specific efficacy beliefs.
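The decomposition of direct and indirect effects referred to here can be illustrated with a minimal sketch. The following Python snippet recovers the X→M path (a), the M→Y path (b) and the direct path (c'), and reports the indirect effect as a·b; the simulated standardized data and the single-mediator path model are illustrative assumptions, not the structural model estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1464

# Simulated standardized scores (illustrative only): perceived coaching behavior (X),
# perceived mastery climate as mediator (M), and self-esteem as outcome (Y).
coaching = rng.normal(size=n)
mastery_climate = 0.5 * coaching + rng.normal(scale=0.8, size=n)
self_esteem = 0.3 * coaching + 0.4 * mastery_climate + rng.normal(scale=0.7, size=n)

def ols(y, *predictors):
    """Ordinary least squares; returns slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(mastery_climate, coaching)[0]                     # X -> M path
b, c_prime = ols(self_esteem, mastery_climate, coaching)  # M -> Y and direct X -> Y paths

indirect = a * b
print(f"direct effect = {c_prime:.3f}, indirect effect = {indirect:.3f}, "
      f"total = {c_prime + indirect:.3f}")
```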
Abstract:
Strigolactones are a group of plant compounds of diverse but related chemical structures. They have similar bioactivity across a broad range of plant species, act to optimize plant growth and development, and promote soil microbe interactions. Carlactone, a common precursor to strigolactones, is produced by conserved enzymes found in a number of diverse species. Versions of the MORE AXILLARY GROWTH1 (MAX1) cytochrome P450 from rice and Arabidopsis thaliana make specific subsets of strigolactones from carlactone. However, the diversity of natural strigolactones suggests that additional enzymes are involved and remain to be discovered. Here, we use an innovative method that has revealed a missing enzyme involved in strigolactone metabolism. By using a transcriptomics approach involving a range of treatments that modify strigolactone biosynthesis gene expression coupled with reverse genetics, we identified LATERAL BRANCHING OXIDOREDUCTASE (LBO), a gene encoding an oxidoreductase-like enzyme of the 2-oxoglutarate and Fe(II)-dependent dioxygenase superfamily. Arabidopsis lbo mutants exhibited increased shoot branching, but the lbo mutation did not enhance the max mutant phenotype. Grafting indicated that LBO is required for a graft-transmissible signal that, in turn, requires a product of MAX1. Mutant lbo backgrounds showed reduced responses to carlactone, the substrate of MAX1, and methyl carlactonoate (MeCLA), a product downstream of MAX1. Furthermore, lbo mutants contained increased amounts of these compounds, and the LBO protein specifically converts MeCLA to an unidentified strigolactone-like compound. Thus, LBO function may be important in the later steps of strigolactone biosynthesis to inhibit shoot branching in Arabidopsis and other seed plants.
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to the existence of an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all values of DAC output bits. This suggests that a near N gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels for all frequencies tested. This means that there is no benefit to using an array of DDSs for the problems of in-band quantizer nonlinearities. As a result, alternate methods of harmonic spur management must be employed.
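As a rough illustration of the array-combining argument above, the following Python sketch sums N channel outputs and shows that the SNR improves by roughly 10·log10(N) dB when the additive noise is uncorrelated across channels but not at all when it is fully correlated. The idealized sinusoid, noise level and channel counts are illustrative assumptions, not measurements from the 14-channel testbed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative assumption: each DDS channel outputs the same sinusoid plus additive
# Gaussian noise that is either channel-independent or identical on every channel.
fs, f0, n_samples = 1.0e9, 10.123e6, 1 << 16
t = np.arange(n_samples) / fs
signal = np.sin(2 * np.pi * f0 * t)

def combined_snr_db(n_channels, noise_rms=0.05, correlated=False):
    if correlated:
        noise = np.tile(rng.normal(0, noise_rms, n_samples), (n_channels, 1))
    else:
        noise = rng.normal(0, noise_rms, (n_channels, n_samples))
    combined = (signal + noise).sum(axis=0)          # summing the array outputs
    noise_part = combined - n_channels * signal
    p_sig = np.mean((n_channels * signal) ** 2)
    p_noise = np.mean(noise_part ** 2)
    return 10 * np.log10(p_sig / p_noise)

for n in (1, 2, 4, 8, 14):
    print(f"N={n:2d}: uncorrelated {combined_snr_db(n):.1f} dB, "
          f"correlated {combined_snr_db(n, correlated=True):.1f} dB")
```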
Abstract:
During our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this by presenting an automated approach in which the genetic algorithm itself, simultaneously with solving the problem, sets weights to balance the components. Subsequently we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.
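A minimal toy sketch of the idea described here, an indirect genetic algorithm whose chromosome carries both the solution-encoding genes and the weights its decoder uses to balance its components, is given below. The single-machine tardiness instance, the operators and the parameter values are illustrative assumptions, not the scheduling problem or implementation of the paper.

```python
import random

random.seed(4)

# Toy scheduling instance: jobs with (duration, deadline) on a single machine.
jobs = [(3, 6), (2, 4), (4, 12), (1, 5), (5, 14), (2, 9)]

def decode(chromosome):
    """Decoder: the last two genes are self-adaptive weights that balance the
    evolved priority keys against a deadline-urgency heuristic."""
    keys, (w_key, w_urgency) = chromosome[:-2], chromosome[-2:]
    max_deadline = max(d for _, d in jobs)
    order = sorted(range(len(jobs)),
                   key=lambda j: w_key * keys[j] + w_urgency * jobs[j][1] / max_deadline)
    t, tardiness = 0, 0
    for j in order:
        t += jobs[j][0]
        tardiness += max(0, t - jobs[j][1])
    return tardiness

def evolve(pop_size=40, n_gen=100, p_mut=0.2):
    n_genes = len(jobs) + 2                                   # priority keys + two decoder weights
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=decode)                                  # lower total tardiness is better
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]                         # one-point crossover
            for g in range(n_genes):
                if random.random() < p_mut:
                    child[g] = random.random()                # uniform mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=decode)
    return decode(best), best[-2:]

tardiness, weights = evolve()
print(f"best total tardiness = {tardiness}, evolved decoder weights = {weights}")
```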
Abstract:
Beef businesses in northern Australia are facing increased pressure to be productive and profitable with challenges such as climate variability and poor financial performance over the past decade. Declining terms of trade, limited recent gains in on-farm productivity, low profit margins under current management systems and current climatic conditions will leave little capacity for businesses to absorb climate change-induced losses. In order to generate a whole-of-business focus towards management change, the Climate Clever Beef project in the Maranoa-Balonne region of Queensland trialled the use of business analysis with beef producers to improve financial literacy, provide a greater understanding of current business performance and initiate changes to current management practices. Demonstration properties were engaged and a systematic approach was used to assess current business performance, evaluate impacts of management changes on the business and to trial practices and promote successful outcomes to the wider industry. Focus was concentrated on improving financial literacy skills, understanding the business’ key performance indicators and modifying practices to improve both business productivity and profitability. To best achieve the desired outcomes, several extension models were employed: the ‘group facilitation/empowerment model’, the ‘individual consultant/mentor model’ and the ‘technology development model’. Providing producers with a whole-of-business approach and using business analysis in conjunction with on-farm trials and various extension methods proved to be a successful way to encourage producers in the region to adopt new practices into their business, in the areas of greatest impact. The areas targeted for development within businesses generally led to improvements in animal performance and grazing land management further improving the prospects for climate resilience.