985 results for impact parameter
Abstract:
Multi-parametric and quantitative magnetic resonance imaging (MRI) techniques have come into the focus of interest, both as research and diagnostic modalities for the evaluation of patients suffering from mild cognitive decline and overt dementia. In this study we address the question of whether disease-related quantitative magnetization transfer (qMT) effects within the intra- and extracellular matrices of the hippocampus may aid in the differentiation between clinically diagnosed patients with Alzheimer disease (AD), patients with mild cognitive impairment (MCI) and healthy controls. We evaluated 22 patients with AD (n=12) and MCI (n=10) and 22 healthy elderly (n=12) and younger (n=10) controls with multi-parametric MRI. Neuropsychological testing was performed in patients and elderly controls (n=34). In order to quantify the qMT effects, the absorption spectrum was sampled at relevant off-resonance frequencies. The qMT parameters were calculated according to a two-pool spin-bath model including the T1 and T2 relaxation parameters of the free pool, determined in separate experiments. Histograms (fixed bin size) of the normalized qMT parameter values (z-scores) within the anterior and posterior hippocampus (hippocampal head and body) were subjected to a fuzzy c-means classification algorithm with downstream PCA projection. The within-cluster sums of point-to-centroid distances were used to examine the effects of qMT and diffusion anisotropy parameters on the discrimination of healthy volunteers, patients with Alzheimer disease and patients with MCI. The qMT parameters T2(r) (T2 of the restricted pool) and F (fractional pool size) differentiated between the three groups (control, MCI and AD) in the anterior hippocampus. In our cohort, the MT ratio, as proposed in previous reports, did not differentiate between MCI and AD or between healthy controls and MCI, but only between healthy controls and AD.
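A minimal sketch of the clustering step described above — fuzzy c-means on fixed-bin histograms of z-scored parameter values, followed by a 2-D PCA projection. The data, reference statistics, bin count, fuzzifier and cluster number are assumed illustrative values, not the study's settings:

```python
# Minimal sketch: fuzzy c-means on fixed-bin histograms of z-scored values,
# with a downstream 2-D PCA projection. All data and settings are assumed.
import numpy as np

rng = np.random.default_rng(1)

def histogram_of_zscores(values, ref_mean, ref_std, bins=20):
    z = (values - ref_mean) / ref_std          # z-scores against reference stats
    h, _ = np.histogram(z, bins=bins, range=(-4, 4), density=True)
    return h

ref_mean, ref_std = 0.0, 1.0                   # assumed reference (e.g. control) statistics
subjects = [histogram_of_zscores(rng.normal(loc, 1.0, 500), ref_mean, ref_std)
            for loc in [0.0] * 10 + [0.5] * 10 + [1.0] * 10]   # three hypothetical groups
X = np.array(subjects)                         # (n_subjects, n_bins)

def fuzzy_cmeans(X, c=3, m=2.0, iters=100):
    u = rng.dirichlet(np.ones(c), size=len(X)) # random initial memberships
    for _ in range(iters):
        um = u ** m
        centres = um.T @ X / um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        u /= u.sum(1, keepdims=True)
    return centres, u

centres, u = fuzzy_cmeans(X)
labels = u.argmax(1)

# Downstream PCA projection of the histograms to 2-D for inspection of the clusters
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:2].T
print(labels, proj.shape)
```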
Abstract:
The concrete effect of varying kinematic parameters on the energy demand of storage and retrieval machines (stacker cranes) has not yet been investigated. This contribution therefore examines the energetic consequences of different speeds and accelerations of the drives of a storage and retrieval machine. However, changed speeds and accelerations also affect the machine's travel and lifting times. A calculation method is therefore also presented with which the optimal starting point of the hoist and the optimal travel speed can be determined on the basis of the configured kinematic parameters.
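As a minimal, purely illustrative sketch of how travel time depends on the configured kinematic parameters (a symmetric trapezoidal velocity profile is assumed; the speed, acceleration and distance values are examples, not figures from this contribution):

```python
# Minimal sketch (not the paper's method): travel time of a storage/retrieval
# machine axis over distance s under a symmetric trapezoidal velocity profile
# with maximum speed v_max and acceleration a. All values are assumed examples.
def travel_time(s: float, v_max: float, a: float) -> float:
    s_accel = v_max**2 / a          # distance consumed by accelerating and braking
    if s <= s_accel:                # triangular profile: v_max is never reached
        return 2.0 * (s / a) ** 0.5
    return v_max / a + s / v_max    # trapezoidal: accel/brake time plus cruise time

# Example: 40 m travel at 4 m/s with 0.5 m/s^2 acceleration -> ~18 s
print(travel_time(40.0, 4.0, 0.5))
```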
Abstract:
A 318-metre-long sedimentary profile drilled by the International Continental Scientific Drilling Program (ICDP) at Site 5011-1 in Lake El’gygytgyn, Far East Russian Arctic, has been analysed for its sedimentologic response to global climate modes by chronostratigraphic methods. The 12 km wide lake is sited off-centre in an 18 km wide crater that was created by the impact of a meteorite 3.58 Ma ago. Since then, sediments have been deposited continuously. To establish their chronology, major reversals of the Earth’s magnetic field provided initial tie points for the age model, confirming that the impact occurred in the earliest geomagnetic Gauss chron. Various stratigraphic parameters, reflecting redox conditions at the lake floor and climatic conditions in the catchment, were tuned synchronously to Northern Hemisphere insolation variations and the marine oxygen isotope stack, respectively. Thus, a robust age model comprising more than 600 tie points could be defined. It could be shown that deposition of sediments in Lake El’gygytgyn occurred in concert with global climatic cycles. The upper ~160 m of sediments represent the past 3.3 Ma, equivalent to sedimentation rates of 4 to 5 cm ka⁻¹, whereas the lower 160 m represent just the first 0.3 Ma after the impact, equivalent to sedimentation rates in the order of 45 cm ka⁻¹. This study also provides orbitally tuned ages for a total of 8 tephras deposited in Lake El’gygytgyn.
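These sedimentation rates follow directly from layer thickness divided by depositional time; a quick consistency check with the rounded figures quoted above (the exact interval boundaries account for the small difference from the stated ~45 cm ka⁻¹ in the lower section):

\[
\dot{s}_{\mathrm{upper}} \approx \frac{160\,\mathrm{m}}{3.3\,\mathrm{Ma}} = \frac{16\,000\,\mathrm{cm}}{3300\,\mathrm{ka}} \approx 4.8\ \mathrm{cm\,ka^{-1}},
\qquad
\dot{s}_{\mathrm{lower}} \approx \frac{160\,\mathrm{m}}{0.3\,\mathrm{Ma}} \approx 53\ \mathrm{cm\,ka^{-1}}.
\]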
Abstract:
PURPOSE To determine the image quality of an iterative reconstruction (IR) technique in low-dose MDCT (LDCT) of the chest of immunocompromised patients in an intraindividual comparison to filtered back projection (FBP) and to evaluate the dose reduction capability. MATERIALS AND METHODS 30 chest LDCT scans were performed in immunocompromised patients (Brilliance iCT; 20-40 mAs; mean CTDIvol: 1.7 mGy). The raw data were reconstructed using FBP and the IR technique (iDose4™, Philips, Best, The Netherlands) set to seven iteration levels. 30 routine-dose MDCT (RDCT) scans reconstructed with FBP served as controls (mean exposure: 116 mAs; mean CTDIvol: 7.6 mGy). Three blinded radiologists scored subjective image quality and lesion conspicuity. Quantitative parameters including CT attenuation and objective image noise (OIN) were determined. RESULTS In LDCT, high iDose4™ levels led to a significant decrease in OIN (FBP vs. iDose7: subscapular muscle 139.4 vs. 40.6 HU). The high iDose4™ levels provided significant improvements in image quality and in artifact and noise reduction compared to LDCT FBP images. The conspicuity of subtle lesions was limited in LDCT FBP images; it improved significantly with high iDose4™ levels (> iDose4). LDCT with iDose4™ level 6 was determined to be equivalent in image quality to RDCT with FBP. CONCLUSION iDose4™ substantially improves image quality and lesion conspicuity and reduces noise in low-dose chest CT. Compared to RDCT, high iDose4™ levels provide equivalent image quality in LDCT, hence suggesting a potential dose reduction of almost 80%.
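The stated dose reduction follows from the mean CTDIvol values reported above:

\[
\frac{7.6\ \mathrm{mGy} - 1.7\ \mathrm{mGy}}{7.6\ \mathrm{mGy}} \approx 0.78,
\]

i.e. roughly 78%, consistent with the quoted "almost 80%".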
Abstract:
OBJECTIVES Because neural invasion (NI) is still inconsistently reported and not well characterized within gastrointestinal malignancies (GIMs), our aim was to determine the exact prevalence and severity of NI and to elucidate the true impact of NI on patients' prognosis. BACKGROUND The Union Internationale Contre le Cancer (UICC) recently added NI as a novel parameter in the current TNM classification. However, there are only a few existing studies with specific focus on NI, so the distinct role of NI in GIMs is still uncertain. MATERIALS AND METHODS NI was characterized in approximately 16,000 hematoxylin and eosin tissue sections from 2050 patients with adenocarcinoma of the esophagogastric junction (AEG)-I-III, squamous cell carcinoma (SCC) of the esophagus, gastric cancer (GC), colon cancer (CC), rectal cancer (RC), cholangiocellular cancer (CCC), hepatocellular cancer (HCC), and pancreatic cancer (PC). NI prevalence and severity were determined and related to patients' prognosis and survival. RESULTS NI prevalence varied widely, from HCC/6%, CC/28%, RC/34%, AEG-I/36% and AEG-II/36%, SCC/37%, GC/38%, CCC/58%, and AEG-III/65% to PC/100%. The NI severity score was highest in PC (24.9±1.9) and lowest in AEG-I (0.8±0.3). Multivariable analyses including age, sex, TNM stage, and grading revealed that the prevalence of NI was significantly associated with diminished survival in AEG-II/III, GC, and RC. However, increasing NI severity impaired survival in AEG-II/III and PC only. CONCLUSIONS NI prevalence and NI severity vary strongly within GIMs. Determination of NI severity in GIMs is a more precise tool than solely recording the presence of NI and revealed a dismal prognostic impact on patients with AEG-II/III and PC. Evidently, NI is not a concomitant side feature in GIMs and, therefore, deserves special attention for improved patient stratification and individualized therapy after surgery.
Abstract:
BACKGROUND AND PURPOSE The posterior circulation Acute Stroke Prognosis Early CT Score (pc-ASPECTS) applied to CT angiography source images (CTA-SI) predicts the functional outcome of patients in the Basilar Artery International Cooperation Study (BASICS). We assessed the diagnostic and prognostic impact of pc-ASPECTS applied to perfusion CT (CTP) in the BASICS registry population. METHODS We applied pc-ASPECTS to CTA-SI and to cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) parameter maps of BASICS patients in whom both CTA and CTP studies were performed. Hypoattenuation on CTA-SI, relative reduction in CBV or CBF, or relative increase in MTT were rated as abnormal. RESULTS CTA and CTP were available in 27/592 BASICS patients (4.6%). The proportion of patients with any perfusion abnormality was highest for MTT (93%; 95% confidence interval [CI], 76%-99%), compared with 78% (58%-91%) for CTA-SI and CBF, and 46% (27%-67%) for CBV (P < .001). All 3 patients with a CBV pc-ASPECTS < 8, compared with 6 of 23 patients with a CBV pc-ASPECTS ≥ 8, had died at 1 month (RR 3.8; 95% CI, 1.9-7.6). CONCLUSION CTP was performed in a minority of the BASICS registry population. Perfusion disturbances in the posterior circulation were most pronounced on MTT parameter maps. A CBV pc-ASPECTS < 8 may indicate patients with high case fatality.
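The reported relative risk is consistent with the raw proportions given above:

\[
RR = \frac{3/3}{6/23} = \frac{1.00}{0.26} \approx 3.8.
\]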
Abstract:
The effects of power and time conditions of in situ N2 plasma treatment, prior to silicon nitride (SiN) passivation, were investigated on an AlGaN/GaN high-electron mobility transistor (HEMT). These studies reveal that N2 plasma power is a critical parameter to control the SiN/AlGaN interface quality, which directly affects the 2-D electron gas density. Significant enhancement in the HEMT characteristics was observed by using a low-power N2 plasma pretreatment. In contrast, a marked gradual reduction in the maximum drain-source current density (IDS max) and maximum transconductance (gm max), as well as in fT and fmax, was observed as the N2 plasma power increased (up to a 40% decrease at 210 W). Different mechanisms were proposed to be dominant as a function of the discharge power range. A good correlation was observed between the device electrical characteristics and the surface assessment by atomic force microscopy and Kelvin force microscopy techniques.
Abstract:
Different parameters are used to quantify the maturity of fruits at or near harvest (shape, color, flesh texture and internal composition). Flesh firmness is a critical handling parameter for fruits such as peach, pear and apple. Results of previous studies conducted by different researchers have shown that impact techniques can be used to evaluate the firmness of fruits. A prototype impact system for firmness sorting of fruits was developed by Chen and Ruiz-Altisent (Chen et al., 1996). This sensor was mounted and tested successfully on a 3 m section of a commercial conveyor belt (Chen et al., 1998). This work is a further development of the on-line impact system for firmness sorting of fruits. The design of the sensor has been improved and it has been mounted on an experimental fruit packing line (Ortiz-Cañavate et al., 1999).
Abstract:
Container terminals are complex systems where a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port.
Because of the aforementioned, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may result not only in operational delays but also in collateral and reputational damage or loss of time (especially management time), all of which has an impact on the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals that use increasing levels of automation. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of tools to guide decision-making, since they will be adapted to the real climatic and operational conditions that exist on site. For the short term, a method to obtain operational parameter forecasts in container terminals is proposed. To this end, a case study is presented in which forecasts of vessel performance are obtained. This research is based entirely on data gathered from a semi-automated container terminal in Spain. On the other hand, the study analyzes how to manage, evaluate and mitigate disruptions in the long term by means of risk assessment, an interesting approach to evaluate the effect of uncertain but likely events on the long-term throughput of the terminal. In addition, a definition for operational risk evaluation in port facilities is proposed, along with a discussion of the terms that best represent the nature of the activities involved; finally, guidelines to manage the results obtained are provided.
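As a minimal, purely illustrative sketch of a short-term operational forecast of the kind described above (not the thesis's model; the productivity figures and the smoothing factor are assumed):

```python
# Minimal sketch (not the thesis's model): one-step-ahead forecast of vessel
# productivity (moves/hour) via simple exponential smoothing. The sample data
# and the smoothing factor alpha are assumed values for illustration only.
def ses_forecast(history, alpha=0.3):
    """Return the one-step-ahead simple-exponential-smoothing forecast."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

moves_per_hour = [118, 104, 121, 97, 110, 115, 102]   # hypothetical shift records
print(round(ses_forecast(moves_per_hour), 1))
```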
Abstract:
Heart rate variability (HRV) analysis uses time series containing the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from analysis of the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts involves correction methods, some of them based on interpolation, substitution or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and non-linear correction methods on HRV signals with induced artifacts by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals from rats, recorded by telemetry, were used to generate real heart rate variability series free of errors. Missing points (beats) were then simulated in these series in different quantities in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed from the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), the Lomb periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD), computed on each HRV signal with and without artifacts. The results show that, at low levels of missing points, the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of loss only the NPI method yields HRV parameters with low error and few significant differences in comparison to the values calculated for the same signals without missing points.
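A minimal sketch (not the study's code) of two of the correction methods compared above — linear (LI) and cubic spline (CI) interpolation of simulated missing beats — followed by computation of SDNN and RMSSD; the RR values and gap positions are assumed examples:

```python
# Minimal sketch: fill simulated missing beats in an RR-interval series by
# linear and cubic-spline interpolation, then compare SDNN and RMSSD.
import numpy as np
from scipy.interpolate import CubicSpline

def sdnn(rr):
    return float(np.std(rr, ddof=1))                  # standard deviation of RR intervals

def rmssd(rr):
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))  # root mean square of successive differences

rr = np.array([812, 805, 798, 820, 835, 828, 810, 802, 815, 830], dtype=float)  # ms
missing = np.array([3, 6])                            # indices of "lost" beats
kept = np.setdiff1d(np.arange(rr.size), missing)

linear = np.interp(np.arange(rr.size), kept, rr[kept])        # linear interpolation (LI)
cubic = CubicSpline(kept, rr[kept])(np.arange(rr.size))       # cubic spline interpolation (CI)

for name, series in [("original", rr), ("linear", linear), ("cubic", cubic)]:
    print(f"{name:8s} SDNN={sdnn(series):6.2f} ms  RMSSD={rmssd(series):6.2f} ms")
```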
Abstract:
This paper presents the implementation of different non-local planetary boundary layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU) PBL. YSU is a first-order scheme that uses non-local eddy diffusivity coefficients to compute turbulent fluxes. It is based on the MRF, and improves it with an explicit treatment of entrainment. With the aim of evaluating the RAMS results for these PBL parameterizations, a series of numerical simulations have been performed and contrasted with the results obtained using the Mellor and Yamada (MY) scheme, which is also widely used and is the standard PBL scheme in the RAMS model. The numerical study carried out here is focused on mesoscale circulation events during the summer, as these meteorological situations dominate this season of the year on the Western Mediterranean coast. In addition, the sensitivity of these PBL parameterizations to the initial soil moisture content is also evaluated. The results show a warmer and moister PBL for the YSU scheme compared to both MRF and MY. The model also presents a tendency to overestimate the observed temperature and to underestimate the observed humidity, considering all PBL schemes and a low initial soil moisture content. In addition, the bias between the model and the observations is significantly reduced by increasing the initial soil moisture of the corresponding run. Thus, varying this parameter has a positive effect and improves the simulated results in relation to the observations. However, there is still a significant overestimation of the wind speed over flatter terrain, independently of the PBL scheme and the initial soil moisture used, even though RAMS reproduces a different degree of accuracy across the different sensitivity tests.
Abstract:
A number of researchers have investigated the impact of network architecture on the performance of artificial neural networks. Particular attention has been paid to the impact of architectural issues on the performance of the multi-layer perceptron, and to the use of various strategies to attain an optimal network structure. However, there are still perceived limitations with the multi-layer perceptron, and networks that employ a different architecture have gained in popularity in recent years, particularly networks that implement a more localised solution, where the solution in one area of the problem space does not affect, or has only a minimal impact on, other areas of the space. In this study, we discuss the major architectural issues affecting the performance of a multi-layer perceptron, before moving on to examine in detail the performance of a new localised network, namely the bumptree. The work presented here examines how the performance of artificial neural networks is affected by employing alternatives to the long-established multi-layer perceptron. In particular, networks in which each parameter of the final architecture has a localised impact on the problem space being modelled are examined. The alternatives examined are the radial basis function and bumptree neural networks, and the impact of architectural issues on the performance of these networks is examined. Particular attention is paid to the bumptree, with new techniques examined for both developing the bumptree structure and employing this structure to classify patterns.
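As a minimal illustration of the "localised" behaviour discussed above, the following sketch implements a small radial basis function network in which each Gaussian hidden unit responds mainly near its own centre; the data, centres and width parameter are assumed examples, not the study's setup:

```python
# Minimal sketch of a radial basis function (RBF) network: each hidden unit
# produces a localised Gaussian response around its centre, and a linear output
# layer is fitted by least squares. All data and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))             # toy 1-D inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # noisy target

centres = np.linspace(-3, 3, 10).reshape(-1, 1)   # fixed, evenly spaced centres
width = 0.8                                       # Gaussian width (assumed)

def rbf_features(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))         # one localised response per centre

Phi = rbf_features(X, centres, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # linear output weights

X_test = np.array([[0.0], [1.5]])
print(rbf_features(X_test, centres, width) @ w)   # approx. sin(0)=0 and sin(1.5)~1.0
```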
Abstract:
Sulfonic acid functionalised periodic mesoporous organosilicas (PrSO3H-PMOs) with tunable hydrophobicity were synthesised via a surfactant-templating route, and characterised by porosimetry, TEM, XRD, XPS, inverse gas chromatography (IGC) and ammonia pulse chemisorption. IGC reveals that incorporation of ethyl or benzyl moieties into a mesoporous SBA-15 silica framework significantly increases the non-specific dispersive surface energy of adsorption for alkane adsorption, while decreasing the free energy of adsorption of methanol, reflecting increased surface hydrophobicity. The non-specific dispersive surface energy of adsorption of PMO-SO3H materials is strongly correlated with their activity towards palmitic acid esterification with methanol, demonstrating the power of IGC as an analytical tool for identifying promising solid acid catalysts for the esterification of free fatty acids. A new parameter [-ΔGCNP-P], defined as the per-carbon difference in Gibbs free energy of adsorption between alkane and polar probe molecules, provides a simple predictor of surface hydrophobicity and corresponding catalyst activity in fatty acid esterification.
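One plausible reading of this per-carbon parameter, written out explicitly (the symbols below are an interpretation for illustration, not necessarily the paper's notation; n_C is the number of carbon atoms in each probe molecule):

\[
-\Delta G_{\mathrm{C}}^{\mathrm{NP-P}} = -\left(\frac{\Delta G_{\mathrm{ads}}^{\mathrm{alkane}}}{n_{\mathrm{C}}^{\mathrm{alkane}}} - \frac{\Delta G_{\mathrm{ads}}^{\mathrm{polar}}}{n_{\mathrm{C}}^{\mathrm{polar}}}\right).
\]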
Abstract:
This chapter discusses the engineering design and performance of various types of biomass transformation reactors. These reactors vary in their operating principle depending on the processing capacity and the nature of the desired end product, that is, gas, chemicals or liquid bio-oil. A mass balance around a thermal conversion reactor is usually carried out to identify the degree of conversion and to obtain the amounts of the various components in the product. The energy balance around the reactor is essential for determining the optimum reactor temperature and the amount of heat required to complete the overall reactions. Experimental and pilot-plant testing is essential for proper reactor design. However, it is common practice to use correlations and valid parameter values in determining realistic reactor dimensions and configurations. Despite the recent progress in thermochemical conversion technology, reactor performance and scale-up potential remain subjects of continuing research.
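As a minimal illustration of the overall mass and energy balances mentioned above (all stream flows and heating values are assumed numbers, not data from the chapter):

```python
# Minimal sketch of an overall mass and energy balance around a thermal
# conversion reactor. Feed rate, product split and heating values are assumed
# illustrative numbers only.
feed = 100.0                                              # kg/h dry biomass
products = {"gas": 20.0, "bio-oil": 65.0, "char": 15.0}   # kg/h

# Mass balance: closure check and degree of conversion to non-solid products
total_out = sum(products.values())
conversion = (feed - products["char"]) / feed
print(f"mass closure: {total_out:.1f} kg/h out of {feed:.1f} kg/h in")
print(f"conversion to gas + bio-oil: {conversion:.0%}")

# Energy balance on a lower-heating-value basis, ignoring sensible-heat terms
lhv = {"feed": 18.0, "gas": 14.0, "bio-oil": 17.0, "char": 30.0}  # MJ/kg (assumed)
energy_in = feed * lhv["feed"]
energy_out = sum(m * lhv[k] for k, m in products.items())
print(f"net chemical energy change: {energy_out - energy_in:.0f} MJ/h")
```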