914 results for "Residual autocorrelation and autocovariance matrices"


Abstract:

Purpose: To compare monochromatic aberrations of keratoconic eyes when uncorrected, corrected with spherically-powered RGP (rigid gas-permeable) contact lenses, and corrected using simulations of customised soft contact lenses for different magnitudes of rotation (up to 15°) and translation (up to 1 mm) from their ideal position. Methods: The ocular aberrations of examples of mild, moderate and severe keratoconic eyes were measured when uncorrected and when wearing their habitual RGP lenses. Residual aberrations and point-spread functions of each eye, corrected with an ideal customised soft contact lens (designed to neutralise higher-order aberrations, HOA), were calculated as a function of the angle of rotation of the lens from its ideal orientation and of its horizontal and vertical translation. Results: In agreement with the results of other authors, the RGP lenses markedly reduced both lower-order aberrations and HOA for all three patients. When compared with the RGP lens corrections, the customised lens simulations only provided optical improvements if their movements were constrained within limits which appear to be difficult to achieve with current technologies. Conclusions: At the present time, customised contact lens corrections appear likely to offer, at best, only minor optical improvements over RGP lenses for patients with keratoconus. If made in soft materials, however, these lenses may be preferred by patients in terms of comfort. © 2012 The College of Optometrists.
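The rotation step in such simulations can be sketched generically: rotating a lens that carries a Zernike-based correction mixes each pair of same-order cosine/sine Zernike terms, and the residual error is the sum of the eye's coefficients and the rotated correction. The following is a minimal illustration under assumed coefficient values and an OSA-style cos/sin sign convention, not the authors' simulation code.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): residual RMS wavefront error
# after a customised correction is rotated away from its ideal orientation.
# Coefficients are keyed by (n, m): m > 0 holds the cos(m*theta) term,
# m < 0 the sin(|m|*theta) term (assumed convention). Values in microns.

def rotate_zernike(coeffs, theta_deg):
    """Rotate a Zernike coefficient set rigidly by theta (degrees)."""
    theta = np.radians(theta_deg)
    out = dict(coeffs)
    for n, m in {(n, abs(m)) for (n, m) in coeffs if m != 0}:
        c = coeffs.get((n, m), 0.0)    # cos component
        s = coeffs.get((n, -m), 0.0)   # sin component
        out[(n, m)] = c * np.cos(m * theta) - s * np.sin(m * theta)
        out[(n, -m)] = c * np.sin(m * theta) + s * np.cos(m * theta)
    return out

def residual_rms(eye, lens, theta_deg):
    """RMS of (eye aberrations + rotated lens correction)."""
    lens_rot = rotate_zernike(lens, theta_deg)
    keys = set(eye) | set(lens_rot)
    return np.sqrt(sum((eye.get(k, 0) + lens_rot.get(k, 0)) ** 2 for k in keys))

# A lens designed to null the eye's HOA is perfect at 0 degrees; the
# residual grows with rotation, most steeply for high azimuthal orders.
eye = {(3, -3): 0.4, (3, -1): 0.9, (4, 0): 0.2}
lens = {k: -v for k, v in eye.items()}
for ang in (0, 5, 10, 15):
    print(f"{ang:>2} deg: residual RMS = {residual_rms(eye, lens, ang):.3f}")
```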

Abstract:

The primary research question was: What is the nature and degree of alignment between the tenets of learning organizations and the policies and practices of a community college concerning adjunct instructors? I investigated the employment experiences of 8 adjunct instructors at a large community college in the Southeastern U.S. to (a) describe and explain the perspectives of the adjuncts, (b) describe and explain my own adjunct employment experience at the same college, (c) determine how the adjunct policies and practices collectively encountered were congruent with or at variance with the tenets of learning organizations, and (d) use this framework to support recommendations that may help the college achieve more favorable alignment with these tenets. Data on perceived adjunct policies and practices were reduced into 11 categories and, using matrices, were compared with 5 major categories of learning organization tenets. The 5 categories of tenets were: (a) inputs, (b) information flow/communication, (c) employee inclusion/value, (d) teamwork, and (e) facilitation of change. The 11 categories of the college's policies and practices were: (a) becoming an adjunct, (b) full-time employment aspirations, (c) salary, (d) benefits, (e) job security and predictability, (f) job satisfaction, (g) respect, (h) support services, (i) professional development, (j) institutional inclusion, and (k) the future role of adjuncts. The reflective journal component relied on a 5-year (1995-2000) personal and professional journal that I maintained while employed at the same college as the participants. Findings indicate that the college's adjunct policies and practices were most incongruent with 25 of the 70 learning organization tenets. These incongruencies spanned the 5 categories, although most occurred in the employee inclusion/value category. Adjunct instructors wanted inclusion, respect, value, trust, and empowerment in decision-making processes that affect the adjunct policies and practices of the college, but did not perceive this to be part of the present situation.

Abstract:

An increase in the demand for freight shipping in the United States has been predicted for the near future, and Longer Combination Vehicles (LCVs), which can carry more load in each trip, seem like a good solution to the problem. Currently, utilizing LCVs is not permitted in most US states, and little research has been conducted on the effects of these heavy vehicles on roads and bridges. In this research, efforts are made to study these effects by comparing the dynamic and fatigue effects of LCVs with those of more common trucks. Ten steel and prestressed concrete bridges with span lengths ranging from 30' to 140' are designed and modeled using a grid system in MATLAB. Additionally, three real bridges, including two single-span simply supported steel bridges and a three-span continuous steel bridge, are modeled using the same MATLAB code. The equations of motion of three LCVs as well as eight other trucks are derived, and these vehicles are subjected to different road surface conditions and bumps while crossing the designed and real bridges. By forming the bridge equations of motion using the mass, stiffness and damping matrices and considering the interaction between the truck and the bridge, the differential equations are solved using the ODE solver in MATLAB, and the tire forces as well as the deflections and moments in the bridge members are obtained. The results of this study show that for most of the bridges, LCVs produce the smallest values of the Dynamic Amplification Factor (DAF), whereas single-unit trucks cause the highest DAF values when traveling on the bridges. Also, in most cases the DAF values are observed to be smaller than the 33% threshold suggested by the design code. Additionally, fatigue analysis of the bridges in this study confirms that replacing the current truck traffic with higher-capacity LCVs only slightly decreases the remaining fatigue life of the bridges in most cases, which means that taking advantage of these larger vehicles can be a viable option for decision makers.
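The solution step described here (matrix equations of motion integrated with an ODE solver, then a DAF read off the response) can be illustrated with a deliberately simplified model: a simply supported beam represented by its first few modes, crossed by a constant moving force, omitting the truck-bridge interaction and road roughness of the actual study. All numerical values are assumptions, and Python/SciPy stands in for the MATLAB code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only (not the thesis' grid model): modal equations of
# a simply supported beam under a constant moving point load, integrated
# numerically; the DAF is the peak dynamic midspan deflection divided by
# the static one. All parameter values below are assumed.

L = 30.0          # span (m)
EI = 2.0e9        # flexural rigidity (N m^2)
mu = 5.0e3        # mass per unit length (kg/m)
P = 3.0e5         # moving force (N), e.g. a heavy axle group
v = 25.0          # crossing speed (m/s)
zeta = 0.02       # modal damping ratio
nm = 5            # number of sine modes retained

n = np.arange(1, nm + 1)
wn = (n * np.pi / L) ** 2 * np.sqrt(EI / mu)   # modal frequencies (rad/s)

def rhs(t, y):
    q, qd = y[:nm], y[nm:]
    x = v * t
    # modal force of a point load P at position x (zero once it leaves)
    f = 2 * P * np.sin(n * np.pi * x / L) / (mu * L) if 0 <= x <= L else 0 * n
    qdd = f - 2 * zeta * wn * qd - wn ** 2 * q
    return np.concatenate([qd, qdd])

T = L / v
sol = solve_ivp(rhs, (0, 1.5 * T), np.zeros(2 * nm), max_step=1e-3)

# midspan deflection history: w(L/2, t) = sum_n q_n(t) * sin(n*pi/2)
w_mid = (np.sin(n[:, None] * np.pi / 2) * sol.y[:nm]).sum(axis=0)
w_static = P * L ** 3 / (48 * EI)              # static midspan deflection
print("DAF ~", np.max(np.abs(w_mid)) / w_static)
```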

Abstract:

Monitoring multiple myeloma patients for relapse requires sensitive methods to measure minimal residual disease and to establish a more precise prognosis. The present study aimed to standardize a real-time quantitative polymerase chain reaction (PCR) test for the IgH gene with a JH consensus self-quenched fluorescence reverse primer and a VDJH or DJH allele-specific sense primer (self-quenched PCR). This method was compared with an allele-specific real-time quantitative PCR test for the IgH gene using a TaqMan probe and a JH consensus primer (TaqMan PCR). We studied nine multiple myeloma patients from the Spanish group treated with the MM2000 therapeutic protocol. Self-quenched PCR demonstrated a sensitivity of ≥10^-4, or 16 genomes, in most cases; efficiency was 1.71 to 2.14, and intra-assay and interassay reproducibilities were 1.18% and 0.75%, respectively. Sensitivity, efficiency, and residual disease detection were similar with both PCR methods. TaqMan PCR failed in one case because of a mutation in the JH primer binding site, whereas self-quenched PCR worked well in this case. In conclusion, self-quenched PCR is a sensitive and reproducible method for quantifying residual disease in multiple myeloma patients; it yields results similar to TaqMan PCR and may be more effective than the latter when somatic mutations are present in the JH intronic primer binding site.
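The efficiency figures quoted (1.71 to 2.14, where 2.00 is perfect doubling per cycle) are conventionally derived from the slope of a standard curve of Ct versus log10 of input copies. A minimal sketch of that standard calculation follows, with invented dilution data; this is the generic relation, not the paper's specific workup.

```python
import numpy as np

# Hedged sketch: qPCR amplification efficiency from a dilution-series
# standard curve (Ct vs. log10 input copies). An ideal reaction doubles
# the template each cycle (efficiency = 2.00), matching the 1.71-2.14
# range quoted in the abstract. The data points below are made up.

log10_copies = np.array([5, 4, 3, 2, 1], dtype=float)   # serial dilutions
ct = np.array([18.1, 21.6, 25.0, 28.4, 31.9])           # measured Ct values

slope, intercept = np.polyfit(log10_copies, ct, 1)
efficiency = 10 ** (-1.0 / slope)   # per-cycle amplification factor
print(f"slope = {slope:.2f}, efficiency = {efficiency:.2f}")  # ~2.0 here
```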

Abstract:

In this work, optical filter arrays for high-quality spectroscopic applications in the visible (VIS) wavelength range are investigated. The optical filters, consisting of Fabry-Pérot (FP) filters for high-resolution miniaturized optical nanospectrometers, are based on two highly reflective dielectric mirrors with an intermediate polymer resonance cavity. Depending on the height of its resonance cavity, each filter transmits a narrow spectral band (referred to in this work as the filter line). The efficiency of such optical filters depends on the precise fabrication of highly selective multispectral filter arrays of FP filters by low-cost, high-throughput methods. The fabrication of the multiple spectral filters across the entire visible range is achieved in a single imprint step on one substrate using 3D nanoimprint technology with very high vertical resolution. The key to this process integration is the fabrication of 3D nanoimprint stamps carrying the desired arrays of filter cavities. The spectral sensitivity of these optical filters depends on the accuracy of the vertically varying cavities, which are produced by a large-area "soft" nanoimprint technology, UV substrate-conformal imprint lithography (UV-SCIL). The main problems of UV-based SCIL processes, such as a non-uniform residual layer thickness and shrinkage of the polymer, limit the potential applications of this technology. It is very important that the residual layer be thin and uniform so that the critical dimensions of the functional 3D pattern can be controlled during the plasma etching that removes the residual layer. In the case of the nanospectrometer, the cavities of neighbouring FP filters vary vertically, so the volume of each individual filter changes, which leads to a variation of the residual layer thickness under each filter. The volumetric shrinkage caused by the polymerization process affects the size and dimensions of the imprinted polymer cavities. The performance of the large-area UV-SCIL process is improved by using a volume-balanced design and by optimizing the process conditions. The volume-balanced stamp design distributes 64 vertically varying filter cavities into units of 4 cavities that share a common average volume. Using the balanced volumes, uniform residual layer thicknesses (110 nm) are obtained across all filter heights. The polymer shrinkage is analysed quantitatively in the lateral and vertical directions of the FP filters. Shrinkage in the vertical direction has the largest influence on the spectral response of the filters and is reduced from 12% to 4% by changing the exposure time. FP filters fabricated with the volume-balanced stamp and the optimized imprint process show a high-quality spectral response, with a linear dependence between the cavity heights and the spectral positions of the corresponding filter lines.
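The final claim, a linear dependence between cavity height and filter-line position, follows from the ideal Fabry-Pérot resonance condition: transmission peaks occur where the round trip fits a whole number of waves, m·λ = 2·n·d for integer order m, cavity index n and height d (mirror phase shifts neglected). A small sketch with assumed values:

```python
import numpy as np

# Minimal sketch of the physics the abstract relies on: an ideal
# Fabry-Perot cavity of optical thickness n*d transmits at wavelengths
# lambda_m = 2*n*d/m, so the filter line shifts (approximately linearly)
# with cavity height. Index and heights below are illustrative assumptions.

n_cav = 1.5                                    # polymer refractive index
heights_nm = np.linspace(150, 300, 8)          # imprinted cavity heights

for d in heights_nm:
    lambdas = 2 * n_cav * d / np.arange(1, 4)  # orders m = 1, 2, 3
    vis = lambdas[(lambdas > 380) & (lambdas < 780)]
    print(f"d = {d:5.1f} nm -> VIS filter line(s) at {np.round(vis, 1)} nm")
```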

Abstract:

Cheddar cheese was made using a control culture (Lactococcus lactis subsp. lactis), or the control culture plus a galactose-metabolising (Gal+) or galactose-non-metabolising (Gal-) Streptococcus thermophilus adjunct; for each culture type, the pH at whey drainage was either low (pH 6.15) or high (pH 6.45). Sc. thermophilus affected the levels of residual lactose and galactose, and the volatile compound profile and sensory properties of the mature cheese (270 d), to an extent dependent on the drain pH and phenotype (Gal+ or Gal-). For all culture systems, reducing the drain pH resulted in lower levels of moisture and lactic acid, a higher concentration of free amino acids, and higher firmness. The results indicate that Sc. thermophilus may be used to diversify the sensory properties of Cheddar cheese, for example from a fruity, buttery odour and creamy flavour to a more acid taste, rancid odour and sweaty cheese flavour at high drain pH.

Abstract:

Neodymium isotopic compositions (εNd) have been widely used for the last fifty years as a tracer of past ocean circulation, and more intensively during the last decade to investigate ocean circulation during the Cretaceous period. Despite a growing set of data, circulation patterns during this period remain unclear. In particular, the identification of the deep-water masses and their spatial extension within the different oceanic basins are poorly constrained. In this study we present new deep-water εNd data inferred from the Nd isotope composition of fish remains and Fe-Mn oxyhydroxide coatings on foraminifera tests, along with new εNd data for the residual (partly detrital) fraction, recovered from DSDP sites 152 (Nicaraguan Rise), 258 (Naturaliste Plateau), 323 (Bellingshausen Abyssal Plain), and ODP sites 690 (Maud Rise) and 700 (East Georgia Basin, South Atlantic). The presence of abundant authigenic minerals in the sediments at sites 152 and 690, detected by XRD analyses, may explain both the middle rare earth element enrichments in the spectra of the residual fraction and the evolution of the residual fraction εNd, which mirrors that of the bottom waters at these two sites. The results point towards a close correspondence between the bottom-water εNd values of sites 258 and 700 from the late Turonian to the Santonian. Since the deep-water Nd isotope values at these two sites are also similar to those at other proto-Indian sites, we propose the existence of a common intermediate- to deep-water mass as early as the mid-Cretaceous. This water mass would have extended from the central part of the South Atlantic to the eastern proto-Indian Ocean sites, beyond the Kerguelen Plateau. Furthermore, data from south and north of the Rio Grande Rise-Walvis Ridge complex (sites 700 and 530) are indistinguishable from the Turonian to the Campanian, suggesting a common water mass since at least the Turonian. This view is supported by a reconstruction of the Rio Grande Rise-Walvis Ridge complex during the Turonian, highlighting the likely existence of a deep breach between the Rio Grande Rise and the proto-Walvis Ridge at that time. Thus deep-water circulation may have been possible between the different austral basins as early as the Turonian, despite the presence of potential oceanic barriers. Comparison of the new seawater and residue εNd data from the Nicaraguan Rise suggests a westward circulation of intermediate waters through the Caribbean Seaway, from the North Atlantic to the Pacific, during the Maastrichtian and Paleocene. This westward circulation reduced the influence of Pacific waters in the Atlantic and was likely responsible for the more uniform, less radiogenic εNd values in the North Atlantic after 80 Ma. Additionally, our data document an increasing εNd trend in several oceanic basins during the Maastrichtian and the Paleocene, which is most pronounced in the North Pacific. Although the origin of this increase remains unclear, it might be explained by an increased contribution of radiogenic material to upper ocean waters in the northern Pacific. By sinking to depth, these waters may have redistributed, to some extent, more radiogenic signatures to other ocean basins through deep-water exchanges.
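For reference, the εNd notation used throughout expresses the deviation of a sample's 143Nd/144Nd ratio from the chondritic uniform reservoir (CHUR) in parts per ten thousand; this is the standard definition rather than anything specific to this study:

```latex
\varepsilon_{\mathrm{Nd}} =
  \left(
    \frac{\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
         {\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}}
    - 1
  \right) \times 10^{4}
```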

Abstract:

Tomato (Lycopersicon esculentum Mill.) is the second most important vegetable crop worldwide and a rich source of hydrophilic (H) and lipophilic (L) antioxidants. The H fraction is constituted mainly by ascorbic acid and soluble phenolic compounds, while the L fraction contains carotenoids (mostly lycopene), tocopherols, sterols and lipophilic phenolics [1,2]. To obtain these antioxidants it is necessary to follow appropriate extraction methods and processing conditions. In this regard, this study aimed at determining the optimal extraction conditions for H and L antioxidants from a tomato surplus. A 5-level full factorial design with 4 factors (extraction time (t, 0-20 min), temperature (T, 60-180 °C), ethanol percentage (Et, 0-100%) and solid/liquid ratio (S/L, 5-45 g/L)) was implemented, and response surface methodology was used for the analysis. Extractions were carried out in a Biotage Initiator microwave apparatus. The concentration-time response methods of crocin and β-carotene bleaching were applied (using 96-well microplates), since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively [3]. Measurements were carried out at intervals of 3, 5 and 10 min (initiation, propagation and asymptotic phases) over a time frame of 200 min. The parameters Pm (maximum protected substrate) and Vm (amount of protected substrate per g of extract), together with the so-called IC50, were used to quantify the response. The optimum extraction conditions were as follows: t=2.25 min, T=149.2 °C, Et=99.1% and S/L=15.0 g/L for H antioxidants; and t=15.4 min, T=60.0 °C, Et=33.0% and S/L=15.0 g/L for L antioxidants. The proposed model was validated based on the high values of the adjusted coefficient of determination (R²adj > 0.91) and on the non-significant differences between predicted and experimental values. It was also found that the antioxidant capacity of the H fraction was much higher than that of the L fraction.
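The IC50 used here to quantify the response is generically obtained by fitting a dose-response curve to fractional-protection data. A hedged sketch with invented numbers follows; the logistic form and the concentration values are assumptions, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic illustration (not the authors' analysis): estimating an IC50 by
# fitting a logistic dose-response curve to fractional-response data.
# Concentrations and responses below are invented for the example.

def logistic(c, ic50, hill):
    """Fraction of substrate protected at extract concentration c."""
    return 1.0 / (1.0 + (ic50 / c) ** hill)

conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])   # mg/mL, assumed
resp = np.array([0.06, 0.12, 0.24, 0.47, 0.70, 0.86, 0.96])

(ic50, hill), _ = curve_fit(logistic, conc, resp, p0=(0.5, 1.0))
print(f"IC50 ~ {ic50:.2f} mg/mL (Hill slope {hill:.2f})")
```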

Abstract:

Tomato is the second most important vegetable crop worldwide and a rich source of industrially interesting antioxidants. Hence, the microwave-assisted extraction of hydrophilic (H) and lipophilic (L) antioxidants from a surplus tomato crop was optimized using response surface methodology. The relevant independent variables were temperature (T), extraction time (t), ethanol concentration (Et) and solid/liquid ratio (S/L). The concentration-time response methods of crocin and β-carotene bleaching were applied, since they are suitable in vitro assays to evaluate the antioxidant activity of H and L matrices, respectively. The optimum operating conditions that maximized the extraction were as follows: t, 2.25 min; T, 149.2 °C; Et, 99.1%; and S/L, 45.0 g/L for H antioxidants; and t, 15.4 min; T, 60.0 °C; Et, 33.0%; and S/L, 15.0 g/L for L antioxidants. This industrial approach indicated that surplus tomatoes possess a high content of antioxidants, offering an alternative source for obtaining natural value-added compounds. Additionally, by testing the relationship between the polarity of the extraction solvent and the antioxidant activity of the extracts in H and L media (polarity-activity relationship), useful information was obtained for the study of complex natural extracts containing components with variable degrees of polarity.
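The response-surface step itself can be sketched generically: fit a full quadratic model in the coded factors by least squares, then solve for its stationary point. The data below are simulated stand-ins for the design runs; this is the standard RSM recipe, not the authors' code.

```python
import numpy as np

# Sketch under assumptions: fit y = b0 + sum(bi*xi) + sum(bij*xi*xj) to
# coded factors (t, T, Et, S/L) by least squares and locate the stationary
# point. X and y would come from the factorial runs (simulated here).

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))           # 30 runs, 4 coded factors
y = 5 + X @ np.array([1.0, -0.5, 0.8, 0.2]) - (X**2).sum(axis=1) \
    + rng.normal(0, 0.1, 30)

def design_matrix(X):
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Stationary point: grad(y) = 0 gives (B + B^T) x = -b, where b holds the
# linear coefficients and upper-triangular B the quadratic ones.
k = X.shape[1]
b = beta[1:1 + k]
B = np.zeros((k, k))
idx = 1 + k
for i in range(k):
    for j in range(i, k):
        B[i, j] = beta[idx]
        idx += 1
x_opt = np.linalg.solve(B + B.T, -b)
print("stationary point (coded units):", np.round(x_opt, 2))
```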

Abstract:

Master's thesis, Vinifera Euromaster - Instituto Superior de Agronomia - UL

Abstract:

BACKGROUND: Epidemiological studies show that high circulating cystatin C is associated with risk of cardiovascular disease (CVD), independent of creatinine-based renal function measurements. It is unclear whether this relationship is causal, arises from residual confounding, and/or is a consequence of reverse causation. OBJECTIVES: The aim of this study was to use Mendelian randomization to investigate whether cystatin C is causally related to CVD in the general population. METHODS: We incorporated participant data from 16 prospective cohorts (n = 76,481) with 37,126 measures of cystatin C and added genetic data from 43 studies (n = 252,216) with 63,292 CVD events. We used the common variant rs911119 in CST3 as an instrumental variable to investigate the causal role of cystatin C in CVD, including coronary heart disease, ischemic stroke, and heart failure. RESULTS: Cystatin C concentrations were associated with CVD risk after adjusting for age, sex, and traditional risk factors (relative risk: 1.82 per doubling of cystatin C; 95% confidence interval [CI]: 1.56 to 2.13; p = 2.12 × 10^-14). The minor allele of rs911119 was associated with decreased serum cystatin C (6.13% per allele; 95% CI: 5.75 to 6.50; p = 5.95 × 10^-211), explaining 2.8% of the observed variation in cystatin C. Mendelian randomization analysis did not provide evidence for a causal role of cystatin C, with a causal relative risk for CVD of 1.00 per doubling of cystatin C (95% CI: 0.82 to 1.22; p = 0.994), which was statistically different from the observational estimate (p = 1.6 × 10^-5). A causal effect of cystatin C was not detected for any individual component of CVD. CONCLUSIONS: Mendelian randomization analyses did not support a causal role of cystatin C in the etiology of CVD. As such, therapeutics targeted at lowering circulating cystatin C are unlikely to be effective in preventing CVD.
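The headline Mendelian-randomization estimate is, at its core, a Wald ratio: the genetic effect on the outcome divided by the genetic effect on the exposure. A hedged numerical sketch follows, using the per-allele cystatin C effect quoted in the abstract together with an assumed, null-consistent per-allele outcome effect; the delta-method standard error is illustrative, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch of the Wald-ratio computation behind a single-variant
# Mendelian randomization analysis. Only beta_gx is taken from the
# abstract; beta_gy and its SE are invented for illustration.

# Genetic association with exposure: each minor allele lowers cystatin C
# by 6.13%, i.e. a factor of 0.9387, expressed per doubling (log2 scale).
beta_gx = np.log2(1 - 0.0613)

# Genetic association with outcome: log relative risk of CVD per allele
# (assumed value consistent with a null result).
beta_gy, se_gy = 0.0005, 0.005

beta_iv = beta_gy / beta_gx            # causal log-RR per doubling
se_iv = abs(se_gy / beta_gx)           # first-order delta method
rr = np.exp(beta_iv)
ci = np.exp(beta_iv + np.array([-1.96, 1.96]) * se_iv)
print(f"causal RR per doubling: {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```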

Abstract:

Durability issues of reinforced concrete construction cost millions of dollars in repair or demolition. Identification of the causes of degradation and prediction of service life based on experience, judgement and local knowledge have limitations in addressing all the associated issues. The objective of this CRC CI research project is to develop a tool that will assist in the interpretation of the symptoms of degradation of concrete structures, estimate residual capacity and recommend cost-effective solutions. This report documents the research undertaken in connection with this project. The primary focus of this research is centred on the case studies provided by Queensland Department of Main Roads (QDMR) and Brisbane City Council (BCC). These organisations are responsible for managing a huge volume of bridge infrastructure in the state of Queensland, Australia. The main issue to be addressed in managing these structures is the deterioration of the bridge stock, leading to a reduction in service life. Other issues such as political backlash, public inconvenience and approach land acquisitions are crucial but are not within the scope of this project. It should be noted that deterioration is accentuated by aggressive environments such as salt water and acidic or sodic soils. Carse (2005) noted that road authorities need to invest their first dollars in understanding their local concretes and optimising the durability performance of structures, and then look at potential remedial strategies.

Abstract:

This paper describes the process adopted in developing an integrated decision support framework for planning office building refurbishment projects, with specific emphasis on optimising rentable floor space, structural strengthening, residual life and sustainability. Expert opinion on the issues to be considered in such a tool is being captured through the Delphi process, which is currently ongoing. The methodology for development of the integrated tool will be validated against decisions taken during a case study project, the refurbishment of the CH1 building of Melbourne City Council, which will be followed through to completion by the research team. The current status of the CH1 planning will be presented in the context of the research project.

Abstract:

Monitoring Internet traffic is critical in order to acquire a good understanding of threats to computer and network security and in designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or the optimization of current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for developing more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early signs of new vulnerabilities or outbreaks of new automated malicious code, such as worms. The outcomes of this research include (the residual-space detection step is sketched after this list):

• Identification of repeated use of attack tools and attack processes, through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;

• Application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;

• Detection of new attacks in low-interaction honeypot traffic through the use of the principal components' residual space and the squared prediction error (SPE) statistic;

• Real-time detection of new attacks using recursive principal component analysis;

• A proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
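A minimal sketch of the SPE-based detection idea follows (invented data; not the thesis implementation): project observations onto the retained principal components, measure the energy left in the residual subspace, and flag observations whose SPE exceeds a control limit.

```python
import numpy as np

# Hedged sketch of residual-space anomaly detection: fit PCA on feature
# vectors of "normal" activity, keep the top-k principal components, and
# flag a new observation when its squared prediction error (SPE, also
# called the Q statistic) exceeds a threshold. Data and the empirical
# threshold choice are invented for the example.

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                  # training feature vectors
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3
P = Vt[:k].T                                   # retained loadings (8 x k)

def spe(x):
    """Squared prediction error of one observation (residual energy)."""
    xc = x - mu
    resid = xc - P @ (P.T @ xc)                # part not explained by PCs
    return float(resid @ resid)

# Simple empirical control limit: 99th percentile of training SPE
# (chi-square-based limits are the classical alternative).
limit = np.percentile([spe(x) for x in X], 99)

x_new = rng.normal(size=8) + 4.0               # a deviating observation
verdict = "anomaly" if spe(x_new) > limit else "normal"
print(f"SPE = {spe(x_new):.2f}, limit = {limit:.2f} -> {verdict}")
```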

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for the non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation suitable for the practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual-member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections.
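The "gradual cross-sectional yielding" that these formulations capture implicitly can be illustrated with a generic fiber-section moment-curvature calculation for an elastic-perfectly-plastic rectangular section. This is a textbook illustration under assumed dimensions and yield stress, not the refined plastic hinge or pseudo plastic zone formulation itself.

```python
import numpy as np

# Illustrative sketch (not the thesis formulation): gradual yielding of a
# rectangular elastic-perfectly-plastic section, traced with a fiber
# discretization. Section size and material values are assumed.

b, d = 0.1, 0.3            # section width and depth (m)
fy, E = 300e6, 200e9       # yield stress and elastic modulus (Pa)
nfib = 200
yc = (np.arange(nfib) + 0.5) / nfib * d - d / 2   # fiber centroids
A = b * d / nfib                                   # fiber area

def moment(kappa):
    """Bending moment for curvature kappa about the section centroid."""
    stress = np.clip(E * kappa * yc, -fy, fy)      # elastic, capped at +/-fy
    return np.sum(stress * yc * A)

My = fy * b * d**2 / 6      # first-yield moment (elastic section modulus)
ky = fy / (E * d / 2)       # first-yield curvature
for mult in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"kappa = {mult:>4}*ky -> M/My = {moment(mult * ky) / My:.3f}")
# M/My rises from 1.0 toward the plastic shape factor Mp/My = 1.5 as
# plasticity spreads from the extreme fibers toward the neutral axis.
```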