Abstract:
Recent work has shown that both the amplitude of upper-level Rossby waves and the tropopause sharpness decrease with forecast lead time for several days in some operational weather forecast systems. In this contribution, the evolution of error growth in a case study of this forecast error type is diagnosed through analysis of operational forecasts and hindcast simulations. Potential vorticity (PV) on the 320-K isentropic surface is used to diagnose Rossby waves. The Rossby-wave forecast error in the operational ECMWF high-resolution forecast is shown to be associated with errors in the forecast of a warm conveyor belt (WCB), through trajectory analysis and an error metric for WCB outflows. The WCB forecast error is characterised by an overestimation of WCB amplitude, a location of the WCB outflow regions that is too far to the southeast, and a resulting underestimation of the magnitude of the negative PV anomaly in the outflow. Essentially the same forecast error development also occurred in all members of the ECMWF Ensemble Prediction System and the Met Office MOGREPS-15, suggesting that in this case model error made an important contribution to the development of forecast error in addition to initial condition error. Exploiting this robustness of the forecast error, a comparison was performed between the realised flow evolution, proxied by a sequence of short-range simulations, and a contemporaneous forecast. Both the proxy to the realised flow and the contemporaneous forecast were produced with the Met Office Unified Model enhanced with tracers of diabatic processes modifying potential temperature and PV. Clear differences were found in the way potential temperature and PV are modified in the WCB between proxy and forecast. These results demonstrate that differences in potential temperature and PV modification in the WCB can be responsible for forecast errors in Rossby waves.
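The outflow error described above can be illustrated with a toy calculation: compare the mean PV anomaly over a WCB outflow region between a forecast and the (proxy) realised flow. A minimal numpy sketch with illustrative fields and a hypothetical outflow mask, not the paper's actual metric:

```python
import numpy as np

def outflow_pv_anomaly(pv, clim_pv, outflow_mask):
    """Mean PV anomaly over a WCB outflow region (illustrative metric,
    not the paper's exact error metric)."""
    return (pv - clim_pv)[outflow_mask].mean()

# Toy 320-K PV fields (PVU): the forecast underestimates the magnitude of
# the negative PV anomaly in the outflow, as in the case study above.
clim = np.full((4, 4), 2.0)                 # background PV of ~2 PVU
analysis, forecast = clim.copy(), clim.copy()
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                       # hypothetical outflow region
analysis[mask] -= 1.5                       # realised negative anomaly
forecast[mask] -= 0.8                       # weaker forecast anomaly

err = (outflow_pv_anomaly(forecast, clim, mask)
       - outflow_pv_anomaly(analysis, clim, mask))
print(round(err, 2))    # positive: anomaly magnitude underestimated
```

A positive error here corresponds to the underestimated negative PV anomaly reported in the abstract.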
Abstract:
Observations have been obtained within an intense (precipitation rates > 50 mm h⁻¹) narrow cold-frontal rainband (NCFR) embedded within a broader region of stratiform precipitation. In situ data were obtained from an aircraft which flew near a steerable dual-polarisation Doppler radar. The observations were obtained to characterise the microphysical properties of cold-frontal clouds, with an emphasis on ice and precipitation formation and development. Primary ice nucleation near cloud top (−55 °C) appeared to be enhanced by convective features. However, ice multiplication led to the largest ice particle number concentrations being observed at relatively high temperatures (> −10 °C). The multiplication process (most likely rime splintering) occurs when stratiform precipitation interacts with supercooled water generated in the NCFR. Graupel was notably absent in the data obtained. Ice multiplication processes are known to have a strong impact in glaciating isolated convective clouds, but have rarely been studied within larger organised convective systems such as NCFRs. Secondary ice particles will impact on precipitation formation and cloud dynamics due to their relatively small size and high number density. Further modelling studies are required to quantify the effects of rime splintering on precipitation and dynamics in frontal rainbands. Available parametrizations used to diagnose the particle size distributions do not account for the influence of ice multiplication. This deficiency in parametrizations is likely to be important in some cases for modelling the evolution of cloud systems and the precipitation formation. Ice multiplication also has a significant impact on artefact removal from in situ particle imaging probes.
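The point about diagnosed particle size distributions can be made concrete: a diagnosed exponential PSD fixes the total number concentration, so any multiplication-driven enhancement is simply absent from it. A hedged sketch with illustrative parameter values, not values drawn from the observations above:

```python
import numpy as np

# Exponential PSD N(D) = N0 * exp(-lam * D), a common diagnostic form.
# All parameter values here are illustrative.
N0, lam = 1.0e6, 2.0e3            # intercept (m^-4) and slope (m^-1)
D = np.linspace(0.0, 5e-3, 2000)  # particle diameter (m)
dD = D[1] - D[0]

n_diag = np.sum(N0 * np.exp(-lam * D)) * dD   # diagnosed number conc. (m^-3)

# Rime splintering at T > -10 C can raise ice number concentrations by
# orders of magnitude; a fixed factor stands in for that missing process.
enhancement = 100.0
n_obs_like = enhancement * n_diag
print(n_obs_like / n_diag)        # the diagnosed PSD misses this factor
```

The ratio printed is exactly the enhancement factor: the parametrized PSD cannot represent it without an explicit multiplication term.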
Abstract:
Prior literature shows that the Felder and Silverman learning styles model (FSLSM) has been widely adopted to cater to the individual styles of learners, whether in traditional or Technology Enhanced Learning (TEL). To infer this model, the Index of Learning Styles (ILS) instrument was proposed. This research aims to analyse the soundness of this instrument in an Arabic sample. Data were integrated from different courses and years. A total of 259 engineering students participated voluntarily in the study. Reliability was analysed by applying internal construct reliability, inter-scale correlation, and total item correlation. Construct validity was also considered by running factor analysis. The overall results indicated that the reliability and validity of the perception and input dimensions were moderately supported, whereas the processing and understanding dimensions showed low internal-construct consistency and their items loaded weakly on the associated constructs. Generally, the instrument needs further effort to improve its soundness. However, considering the consistency of the results produced by engineering students irrespective of cross-cultural differences, it can be adopted to diagnose learning styles.
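Internal-construct consistency of the kind reported above is commonly estimated with Cronbach's alpha. A minimal sketch on synthetic item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Synthetic responses on a 3-item scale: items mostly agree, so the
# internal consistency comes out high.
scores = np.array([[1, 1, 1],
                   [2, 2, 1],
                   [3, 3, 3],
                   [4, 4, 4],
                   [5, 5, 4]])
print(round(cronbach_alpha(scores), 2))
```

Scales like the ILS processing and understanding dimensions described above would show a much lower alpha on this statistic.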
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer's Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by classifying each measurement of a hippocampal volume as healthy-control-sized or Alzheimer's Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer's Disease. The preliminary results reach an accuracy of 85.8%, similar to state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help to a doctor in the diagnosis of Alzheimer's Disease, or even further the understanding of how Alzheimer's Disease affects the hippocampus.
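The two-stage idea — binarise each hippocampal measurement as control-sized or AD-sized, then combine the binary features with probability-based weights — can be sketched as follows. This is a simplified illustration with toy data and midpoint thresholds, not the published algorithm:

```python
import numpy as np

def fit(X, y):
    """Per-measurement thresholds plus probability-based weights.
    Each measurement is binarised as AD-sized (below the midpoint of the
    class means) and weighted by its training accuracy."""
    mu0 = X[y == 0].mean(axis=0)          # healthy-control means
    mu1 = X[y == 1].mean(axis=0)          # AD means (smaller volumes)
    thresh = (mu0 + mu1) / 2.0
    binarised = (X < thresh).astype(int)   # below threshold -> AD-sized
    weights = (binarised == y[:, None]).mean(axis=0)  # per-feature accuracy
    return thresh, weights

def predict(X, thresh, weights):
    votes = (X < thresh).astype(int)
    score = votes @ weights / weights.sum()   # weighted fraction of AD votes
    return (score > 0.5).astype(int)

# Toy data: feature 0 is informative (AD volumes smaller), feature 1 is noise.
X = np.array([[3.0, 1.0], [3.1, 2.0], [2.0, 1.5], [2.1, 1.9]])
y = np.array([0, 0, 1, 1])
thresh, weights = fit(X, y)
print(predict(X, thresh, weights))        # recovers the training labels
```

The per-feature weights are what give the descriptive ability mentioned above: each weight states how diagnostic a single hippocampal measurement is.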
Abstract:
Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic, frictionless governing equations for fluid dynamics, with tendencies arising from the parametrization of other physical processes. Since potential vorticity (PV) is conserved following fluid flow in adiabatic, frictionless circumstances, it is possible to isolate the effects of non-conservative processes by accumulating PV changes in an air-mass relative framework. This "PV tracer technique" is used to accumulate separately the effects on PV of each of the different non-conservative processes represented in a numerical model of the atmosphere. Dynamical cores are not exactly conservative because they introduce, explicitly or implicitly, some level of dissipation and adjustment of prognostic model variables which acts to modify PV. Here, the PV tracer technique is extended to diagnose the cumulative effect of the non-conservation of PV by a dynamical core and its characteristics relative to the PV modification by parametrized physical processes. Quantification using the Met Office Unified Model reveals that the magnitude of the non-conservation of PV by the dynamical core is comparable to those from physical processes. Moreover, the residual of the PV budget, when tracing the effects of the dynamical core and physical processes, is at least an order of magnitude smaller than the PV tracers associated with the most active physical processes. The implication of this work is that the non-conservation of PV by a dynamical core can be assessed in case studies with a full suite of physics parametrizations and directly compared with the PV modification by parametrized physical processes. The non-conservation of PV by the dynamical core is shown to move the position of the extratropical tropopause while the parametrized physical processes have a lesser effect at the tropopause level.
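The budget closure described above — process tracers plus a dynamical-core tracer compared against the total PV change — can be illustrated with a toy accumulation; all numbers are synthetic:

```python
# Air-mass-relative PV budget sketch: sum the accumulated PV change per
# traced process and compare with the total PV change; the leftover is
# the budget residual. All numbers here are synthetic.
pv_start = 2.0
tendencies = {                  # accumulated PV change per process (PVU)
    "radiation":    -0.10,
    "microphysics": -0.35,
    "convection":    0.20,
    "dyn_core":     -0.08,      # non-conservation by the dynamical core
}
pv_end = pv_start + sum(tendencies.values()) + 0.005   # 0.005 PVU unexplained

residual = (pv_end - pv_start) - sum(tendencies.values())
print(round(residual, 3))       # an order of magnitude below the largest tracer
```

In the toy numbers the residual (0.005 PVU) is far smaller than the most active tracer (0.35 PVU), mirroring the closure result reported in the abstract.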
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses oxygen isotope (δ¹⁸Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ¹⁸Ow and salinity throughout time. Here we use the isotope-enabled fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and time-independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ¹⁸Oc), which primarily reflect δ¹⁸Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ¹⁸Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ¹⁸Ow-salinity relationships. Spatial gradients are generally shallower but within ∼50% of the actual simulated LH to LGM and LH to LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both the spatial and the actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are found to be caused by changes in regional freshwater budgets, ocean circulation, and sea ice regimes. These can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian and Tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ¹⁸Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
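The sensitivity of a salinity reconstruction to the assumed δ¹⁸Ow-salinity slope can be shown with a back-of-the-envelope calculation; the slopes and anomaly below are illustrative, not HadCM3 output:

```python
# Palaeosalinity from a d18Ow residual via an assumed linear relation:
# dS = d(d18Ow) / slope.  Using a modern (spatial) slope where the past
# (temporal) slope was shallower biases the estimate, as discussed above.
# Numbers are illustrative, not model output.
d18ow_res = 0.6       # reconstructed d18Ow anomaly (per mil)
slope_modern = 0.5    # assumed spatial slope (per mil per psu)
slope_true = 0.3      # shallower temporal slope

dS_est = d18ow_res / slope_modern    # inferred salinity change (psu)
dS_true = d18ow_res / slope_true     # actual salinity change (psu)
print(round(dS_true - dS_est, 1))    # salinity change underestimated (psu)
```

Because the temporal slope is shallower, the same isotopic anomaly corresponds to a larger salinity change than the spatial slope implies, in line with the errors of several psu quoted above.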
Abstract:
Human-induced land-use change (LUC) alters the biogeophysical characteristics of the land surface, influencing the surface energy balance. The level of atmospheric CO2 is expected to increase in the coming century and beyond, modifying temperature and precipitation patterns and altering the distribution and physiology of natural vegetation. It is important to constrain how CO2-induced climate and vegetation change may influence the regional extent to which LUC alters climate. This sensitivity study uses the HadCM3 coupled climate model under a range of equilibrium forcings to show that the impact of LUC declines under increasing atmospheric CO2, specifically in temperate and boreal regions. A surface energy balance analysis is used to diagnose how these changes occur. In Northern Hemisphere winter this pattern is attributed in part to the decline in winter snow cover, and in summer to a reduction in latent cooling at higher levels of CO2. The CO2-induced change in natural vegetation distribution is also shown to play a significant role. Simulations run at elevated CO2 but with present-day vegetation show a significantly increased sensitivity to LUC, driven in part by an increase in latent cooling. This study shows that modelling the impact of LUC needs to accurately simulate CO2-driven changes in precipitation and snowfall, and incorporate accurate, dynamic vegetation distribution.
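The surface energy balance attribution used above can be caricatured in two terms, a shortwave (albedo) term and a latent-cooling term. A toy calculation with illustrative fluxes, not model output:

```python
# Two-term caricature of the surface energy balance response to LUC:
# a shortwave (albedo) term and a latent-cooling term. All fluxes are
# illustrative.
S_down = 200.0                # incoming shortwave (W m^-2)
d_albedo = 0.10               # LUC raises surface albedo (e.g. snow exposed)
d_latent_out = -15.0          # latent heat flux out of the surface falls

d_sw_absorbed = -S_down * d_albedo    # less shortwave absorbed at the surface
d_net = d_sw_absorbed - d_latent_out  # reduced latent cooling offsets part of it
print(round(d_net, 1))                # net change in surface forcing (W m^-2)
```

Shrinking either term (less snow cover, weaker latent cooling at high CO2) shrinks the net LUC signal, which is the mechanism the abstract describes.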
Abstract:
Network diagnosis in Wireless Sensor Networks (WSNs) is a difficult task due to their improvisational nature, the invisibility of internal running status, and particularly because the network structure can change frequently due to link failure. To solve this problem, we propose a Mobile Sink (MS) based distributed fault diagnosis algorithm for WSNs. An MS, or mobile fault detector, is usually a mobile robot or vehicle equipped with a wireless transceiver that performs the task of a mobile base station while also diagnosing the hardware and software status of deployed network sensors. Our MS mobile fault detector moves through the network area polling each static sensor node to diagnose the hardware and software status of nearby sensor nodes using only single-hop communication. Therefore, the fault detection accuracy and functionality of the network is significantly increased. In order to maintain an excellent Quality of Service (QoS), we employ an optimal fault diagnosis tour planning algorithm. In addition to saving energy and time, the tour planning algorithm excludes faulty sensor nodes from the next diagnosis tour. We demonstrate the effectiveness of the proposed algorithms through simulation and real-life experimental results.
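The diagnosis tour idea — visit each working node within single-hop range while skipping nodes already marked faulty — can be sketched with a greedy nearest-neighbour tour. This is a stand-in for the paper's optimal tour-planning algorithm, with hypothetical node coordinates:

```python
import math

def plan_tour(sink, nodes, faulty):
    """Greedy nearest-neighbour diagnosis tour for a mobile sink.
    Nodes already marked faulty are excluded from the tour, as in the
    algorithm described above; the greedy ordering is a simplification."""
    remaining = {i: p for i, p in nodes.items() if i not in faulty}
    tour, pos = [], sink
    while remaining:
        i = min(remaining, key=lambda j: math.dist(pos, remaining[j]))
        tour.append(i)
        pos = remaining.pop(i)
    return tour

# Hypothetical sensor positions; node 2 was found faulty on a prior tour.
nodes = {1: (0, 1), 2: (2, 0), 3: (0, 3), 4: (5, 5)}
print(plan_tour((0, 0), nodes, faulty={2}))   # node 2 is skipped
```

Excluding known-faulty nodes shortens the tour, saving the energy and time the abstract mentions.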