911 results for CIDOC Conceptual Reference Model


Relevance:

30.00%

Publisher:

Abstract:

The research hypothesis of the thesis is that “an open participation in the co-creation of services and environments makes life easier for vulnerable groups”, assuming that participatory and emancipatory approaches are processes of possible actions and changes aimed at facilitating people’s lives. The adoption of these approaches is put forward as the common denominator of socially innovative practices that, by supporting inclusive processes, allow a shift from a medical model to a civil and human rights approach to disability. The theoretical basis of this assumption finds support in many principles of Inclusive Education, and the main focus of the research hypothesis is on participation and emancipation as approaches to facing emerging and existing problems related to inclusion. The framework of reference for the research is represented by the perspectives adopted in several international documents concerning policies and interventions to promote and support the leadership and participation of vulnerable groups. In the first part, an in-depth analysis of the main academic publications on the central themes of the thesis is carried out. After investigating the framework of reference, the analysis focuses on the main tools of participatory and emancipatory approaches, which connect with the concepts of active citizenship and social innovation. In the second part, two case studies concerning participatory and emancipatory approaches in the areas of concern are presented and analyzed as examples of improving inclusion through the involvement and participation of persons with disabilities. The research has been developed using a holistic and interdisciplinary approach, aimed at providing a knowledge base that fosters a shift from a situation of passivity and care towards a new scenario based on the person’s commitment to the elaboration of his or her own project of life.

Relevance:

30.00%

Publisher:

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting: hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration that exploits the rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined in a two-part semicontinuous model. Three model specifications that address COSP differently are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point, and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot, and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score), and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively).
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
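The two-part semicontinuous structure described above (a probit occurrence model joined to a Gamma model for positive amounts, both linked to radar on the log scale) can be sketched as a simulation in a few lines. This is an illustrative toy only: the coefficients, the log1p transform, and the synthetic radar field are assumptions rather than fitted values from the thesis, and the spatial random effects and MCMC estimation are omitted.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(0)

def simulate_two_part(radar, a0=-0.5, a1=0.8, b0=0.1, b1=0.9, shape=1.5):
    """Two-part semicontinuous rainfall: probit occurrence + Gamma amounts.

    All coefficients are illustrative, not values estimated in the thesis.
    """
    log_radar = np.log1p(radar)
    p_rain = norm.cdf(a0 + a1 * log_radar)      # probit link for rain probability
    occurs = rng.random(radar.size) < p_rain
    mean_amt = np.exp(b0 + b1 * log_radar)      # log-linear mean of positive amounts
    amounts = gamma.rvs(shape, scale=mean_amt / shape, random_state=rng)
    return np.where(occurs, amounts, 0.0)       # exact zeros where no rain occurs

radar = rng.gamma(1.2, 2.0, size=1000)          # synthetic radar-derived rain field
rain = simulate_two_part(radar)
print(f"zero fraction: {np.mean(rain == 0):.2f}")
```

The mixture of exact zeros and right-skewed positive values reproduces the semicontinuous character of hourly rainfall that the thesis models.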

Relevance:

30.00%

Publisher:

Abstract:

This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate using the chemistry-climate model EMAC. CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds and is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties; the calculation of the solute effect (hygroscopicity) is the main difference between them. The kappa-method is for the first time incorporated into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with a prescribed cloud droplet number concentration have also been carried out in order to isolate the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud cover is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold scheme, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation across all particle sizes, especially for the Aitken mode.
As Aitken particles are the major component of the total aerosol number concentration and CCN, and are most sensitive to the aerosol chemical composition (solute) effect on droplet formation, their activation contributes strongly to total cloud droplet formation, thereby producing different cloud droplet spectra. These different spectra influence cloud structure, cloud properties, and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to the spatiotemporal aerosol properties (particle size, number, and composition). The changes in response to the different CDN schemes are more pronounced at lower altitudes than at higher altitudes. Among regions, the subarctic shows the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara, an extremely dry area, is less influenced by changes in CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, the cloud water distribution, and the cloud radiative effect. Aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of weak stratiform precipitation formation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetime, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability, and reduces convective activity. These aerosol-cloud feedback responses are also simulated differently depending on the cloud cover scheme.
The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it links local dynamics and cloud water distributions more tightly in the cloud formation process than the RH-CLC scheme does. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern more similar to observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the ocean. This mainly originates from the difference in simulated skewness between the schemes: the RH simulations yield negatively skewed distributions of cloud cover and the associated cloud water, similar to the observations, while the ST simulations yield positively skewed distributions and hence lower mean values than the RH-CLC scheme. The underestimation of total cloud cover over the ocean, particularly over the Intertropical Convergence Zone (ITCZ), relates to a systematic deficiency in the prognostic calculation of skewness in the current set-up of the ST-CLC scheme.

Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. For capturing aerosol-cloud feedbacks, the HYB scheme is a better method than the STN scheme for predicting cloud and climate parameters with both cloud cover schemes. The RH-CLC scheme offers a better simulation of total cloud cover and the relevant parameters with the HYB scheme and single-moment microphysics (REF) than the ST-CLC scheme does, but is not very sensitive to aerosol-cloud interactions.

Relevance:

30.00%

Publisher:

Abstract:

Our growing understanding of the human mind and cognition and the development of neurotechnology have triggered debate around cognitive enhancement in neuroethics. The dissertation examines the normative issues of memory enhancement and focuses on two issues: (1) the distinction between memory treatment and enhancement; and (2) how the issue of authenticity bears on memory interventions, including memory treatments and enhancements.

The first part consists of a conceptual analysis of the concepts required for the normative considerations. First, the representational nature and the function of memory are discussed. Memory is regarded as a special form of self-representation resulting from a constructive process. Next, the concepts of selfhood, personhood, and identity are examined, and a conceptual tool—the autobiographical self-model (ASM)—is introduced. An ASM is a collection of mental representations of the system’s relations with its past and potential future states. Third, the debate between objectivist and constructivist views of health is considered. I argue for a phenomenological account of health, which is based on the primacy of illness and on negative utilitarianism.

The second part presents a synthesis of the relevant normative issues based on the conceptual tools developed. I argue that memory enhancement can be distinguished from memory treatment by a demarcation regarding the existence of memory-related suffering. That is, memory enhancements are interventions that aim to manipulate memory function in line with the self-interests of the individual, under standard circumstances and without any unwanted suffering or potential suffering resulting from the alteration of memory functions. I then consider the issue of authenticity, namely whether memory intervention or enhancement endangers “one’s true self”.
By analyzing two conceptions of authenticity—authenticity as self-discovery and authenticity as self-creation—I propose that authenticity should be understood in terms of the satisfaction of the functional constraints of an ASM: synchronic coherence, diachronic coherence, and global veridicality. This framework provides clearer criteria for considering the relevant concerns and allows us to examine the moral values of authenticity.

Relevance:

30.00%

Publisher:

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a non-rigid point set matching method based on kernel density correlation and shows its application in statistical-model-based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between the two estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for robust point set matching. Incorporating this non-rigid point set matching method into a statistical-model-based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
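The kernel-density-correlation idea can be illustrated with a rigid, translation-only toy version; the paper's actual method is non-rigid, estimating a full displacement field with deformation and smoothness regularization, which is omitted here. The kernel width, optimizer, and synthetic point sets below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kernel_correlation(x, y, sigma=0.5):
    """Gaussian kernel correlation between two point sets (higher = better aligned)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2)).mean()

def register_translation(floating, reference, sigma=0.5):
    """Translate `floating` onto `reference` by maximizing kernel correlation.

    Initialized at the centroid difference for robustness; a toy stand-in for
    the paper's regularized displacement-field optimization.
    """
    x0 = reference.mean(axis=0) - floating.mean(axis=0)
    cost = lambda t: -kernel_correlation(floating + t, reference, sigma)
    return minimize(cost, x0=x0, method="Nelder-Mead").x

rng = np.random.default_rng(1)
reference = rng.normal(size=(60, 2))
floating = reference + np.array([1.5, -0.8]) + rng.normal(scale=0.05, size=(60, 2))
shift = register_translation(floating, reference)
print("recovered shift:", np.round(shift, 2))  # approximately (-1.5, 0.8)
```

Because the kernel density estimates are smooth, the correlation objective tolerates noise and missing correspondences, which is what makes the approach suitable for noisy edge points from an edge detector.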

Relevance:

30.00%

Publisher:

Abstract:

Discusses the cooperative effort between librarians and science faculty at Bucknell University in developing an effective library use education course for incoming undergraduate science and engineering students. Describes course structure and activities, and includes a library instruction bibliography. (five references) (EA)

Relevance:

30.00%

Publisher:

Abstract:

This article reviews the psychophysiological and brain imaging literature on emotional brain function from a methodological point of view. The difficulties in defining, operationalising, and measuring emotional activation, and in particular aversive learning, are considered. Emotion is a response of the organism during an episode of major significance and involves physiological activation; motivational, perceptual, evaluative, and learning processes; motor expression; action tendencies; and monitoring/subjective feelings. Despite the advances in assessing the physiological correlates of emotional perception and learning processes, a critical appraisal shows that functional neuroimaging approaches encounter methodological difficulties regarding measurement precision (e.g., response scaling and reproducibility) and validity (e.g., response specificity, generalisation to other paradigms, subjects, or settings). Since emotional processes are the result not only of localised but also of widely distributed activation, a more representative model of assessment is needed that systematically relates the hierarchy of high- and low-level emotion constructs to the corresponding patterns of activity and functional connectivity in the brain.

Relevance:

30.00%

Publisher:

Abstract:

We showed that when CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus had almost completely disappeared, the efferent pathway of CA3 was rarely detectable. We used the mouse pilocarpine model of temporal lobe epilepsy (TLE) and iontophoretically injected the anterograde tracer Phaseolus vulgaris leucoagglutinin (PHA-L) into the gliotic CA3, the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei, or the retrograde tracer cholera toxin B subunit (CTB) into the gliotic CA3 area of the hippocampus. In the afferent pathway, the number of neurons projecting to CA3 from the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei increased significantly. In the hippocampus, where CA3 pyramidal neurons were partially lost, calbindin-, calretinin-, and parvalbumin-immunopositive back-projection neurons from the CA1-CA3 area were observed. Sprouting of Schaffer collaterals, with an increased number of large boutons on both sides of the CA1 area, particularly in the stratum pyramidale, was found. When CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus have almost completely disappeared, the surviving CA3 neurons in the rostral 20% of the dorsal hippocampus may play an important role in transmitting the hyperactivity of granule cells to surviving CA1 neurons or to the dorsal part of the lateral septum. We conclude that reorganization of the CA3 area with its downstream or upstream nuclei may be involved in the occurrence of epilepsy.

Relevance:

30.00%

Publisher:

Abstract:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities using a Monte Carlo method, drawing from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed, and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock and reference levels, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance that results in the onset of engine knock.
The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
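The lognormal Monte Carlo core of the KSS, together with a crude stand-in for the KDM's knock factor, can be sketched as follows. The distribution parameters and the quantile-based level estimates are illustrative assumptions, not the report's calibrated values or its actual distribution estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def knock_intensities(n_cycles, mu, sigma):
    """Cycle-to-cycle knock intensities drawn from a lognormal distribution.

    In the real KSS, mu/sigma would be mapped from spark timing, engine speed,
    and load; here they are illustrative constants.
    """
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

def knock_factor(intensities, low_q=0.5, high_q=0.95):
    """Ratio of a high intensity level to a reference (low) level.

    A simple quantile-based stand-in for the KDM's stochastic estimates.
    """
    low, high = np.quantile(intensities, [low_q, high_q])
    return high / low

quiet = knock_intensities(5000, mu=0.0, sigma=0.3)      # little knock
knocking = knock_intensities(5000, mu=0.5, sigma=0.9)   # advanced spark timing
print(knock_factor(quiet) < knock_factor(knocking))
```

A feedback controller could advance spark timing until this factor approaches a target threshold, mirroring the borderline-spark-timing objective described above.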

Relevance:

30.00%

Publisher:

Abstract:

Approximately 90% of fine aerosol in the Midwestern United States has a regional component, with a sizable fraction attributed to secondary production of organic aerosol (SOA). The Ozark Forest is an important source of biogenic SOA precursors such as isoprene (>150 mg m⁻² d⁻¹), monoterpenes (10-40 mg m⁻² d⁻¹), and sesquiterpenes (10-40 mg m⁻² d⁻¹). Anthropogenic sources include secondary sulfate, nitrate, and biomass burning (51-60%), vehicle emissions (17-26%), and industrial emissions (16-18%). Vehicle emissions are an important source of volatile and vapor-phase, semivolatile aliphatic and aromatic hydrocarbons, which are important anthropogenic SOA precursors. The short lifetime of SOA precursors and the complex mixture of functionalized oxidation products make rapid sampling, quantitative processing methods, and comprehensive organic molecular analysis essential elements of a comprehensive strategy to advance understanding of SOA formation pathways. Uncertainties in forecasting SOA production on regional scales are large and are related to uncertainties in biogenic emission inventories and in the measurement of SOA yields under ambient conditions. This work presents a bottom-up approach to developing a conifer emission inventory based on foliar and cortical oleoresin composition, the development of a model to estimate terpene and terpenoid signatures of foliar and bole emissions from conifers, the development of processing and analytic techniques for comprehensive organic molecular characterization of SOA precursors and oxidation products, the implementation of a high-volume sampling technique to measure organic aerosol and vapor-phase organic matter, and results from a 5-day field experiment conducted to evaluate temporal and diurnal trends in SOA precursors and oxidation products.
A total of 98, 115, and 87 terpene and terpenoid species were identified and quantified in commercially available essential oils of Pinus sylvestris, Picea mariana, and Thuja occidentalis, respectively, by comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometric detection (GC × GC-ToF-MS). Analysis of the literature showed that cortical oleoresin composition was similar to the foliar composition of the oldest branches. Our proposed conceptual model for estimating signatures of terpene and terpenoid emissions from foliar and cortical oleoresin showed that the emission potentials of the foliar and bole release pathways are dissimilar and should be considered for conifer species that develop resin blisters or are infested with herbivores or pathogens. Average derivatization efficiencies for Methods 1 and 2 were 87.9% and 114%, respectively. Despite the lower average derivatization efficiency of Method 1, its distinct advantages included greater certainty of derivatization yield for the entire suite of multi- and poly-functional species and fewer processing steps for sequential derivatization. Detection limits for Method 1 using GC × GC-ToF-MS were 0.09-1.89 ng μL⁻¹. A theoretical retention index diagram was developed for a hypothetical GC × 2GC analysis of the complex mixture of SOA precursors and derivatized oxidation products. In general, species eluted (relative to the alkyl diester reference compounds) from the primary column (DB-210) in bands according to carbon number n and from the secondary columns (BPX90, SolGel-WAX) according to functionality, essentially making the GC × 2GC retention diagram a carbon number-functionality grid. The species clustered into 35 groups by functionality, and species within each group exhibited good separation by n. Average recoveries of n-alkanes and polyaromatic hydrocarbons (PAHs) by Soxhlet extraction of XAD-2 resin with dichloromethane were 80.1 ± 16.1% and 76.1 ± 17.5%, respectively.
Vehicle emissions were the common source of HSVOCs [i.e., resolved alkanes, the unresolved complex mixture (UCM), alkylbenzenes, and 2- and 3-ring PAHs]. An absence of monoterpenes at 0600-1000 together with high concentrations of monoterpenoids during the same period was indicative of substantial losses of monoterpenes overnight and in the early morning hours. Post-collection comprehensive organic molecular characterization of SOA precursors and products by GC × GC-ToF-MS in ambient air collected with ~2 hr resolution is a promising method for determining biogenic and anthropogenic SOA yields that can be used to evaluate SOA formation models.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Reperfusion injury is the leading cause of early graft dysfunction after lung transplantation. Activation of neutrophilic granulocytes (PMNs), with the generation of free oxygen radicals, appears to play a key role in this process. The efficacy of ascorbic acid as an antioxidant in ameliorating reperfusion injury after lung transplantation has not yet been studied. METHODS: An in situ autotransplantation model in sheep is presented. The left lung was flushed (Euro-Collins solution) and, after 2 hours of cold storage, reperfused; the right hilus was then clamped (group R [reference], n = 6). Group AA animals (n = 6) were treated with 1 g/kg ascorbic acid before reperfusion. Controls (group C, n = 6) underwent hilar preparation and instrumentation only. RESULTS: In group R, the arterio-alveolar oxygen difference (AaDO2) and pulmonary vascular resistance (PVR) were significantly elevated after reperfusion. Five of 6 animals developed frank alveolar edema. All biochemical parameters showed significant PMN activation. In group AA, AaDO2, PVR, work of breathing, and the level of PMN activation were significantly lower. CONCLUSIONS: The experimental model reliably reproduces all aspects of lung reperfusion injury. Ascorbic acid was able to attenuate reperfusion injury in this experimental setup.

Relevance:

30.00%

Publisher:

Abstract:

The IDA model of cognition is a fully integrated artificial cognitive system reaching across the full spectrum of cognition, from low-level perception/action to high-level reasoning. Extensively based on empirical data, it accurately reflects the full range of cognitive processes found in natural cognitive systems. As a source of plausible explanations for very many cognitive processes, the IDA model provides an ideal tool to think with about how minds work. This online tutorial offers a reasonably full account of the IDA conceptual model, including background material. It also provides a high-level account of the underlying computational “mechanisms of mind” that constitute the IDA computational model.

Relevance:

30.00%

Publisher:

Abstract:

We analyze the impact of stratospheric volcanic aerosols on the diurnal temperature range (DTR) over Europe using long-term subdaily station records. We compare the results with a 28-member ensemble of European Centre/Hamburg version 5.4 (ECHAM5.4) general circulation model simulations. Eight stratospheric volcanic eruptions during the instrumental period are investigated. Seasonal all- and clear-sky DTR anomalies are compared with contemporary (approximately 20-year) reference periods. Clear sky is used to eliminate cloud effects and to better estimate the signal from the direct radiative forcing of the volcanic aerosols. We do not find a consistent effect of stratospheric aerosols on all-sky DTR. For clear skies, we find average DTR anomalies of −0.08°C (−0.13°C) in the observations (in the model), with the largest effect in the second winter after the eruption. Although the clear-sky DTR anomalies from different stations, volcanic eruptions, and seasons show heterogeneous signals in terms of order of magnitude and sign, the significantly negative DTR anomalies (e.g., after the Tambora eruption) are qualitatively consistent with other studies. Relating the clear-sky DTR anomalies to the radiative forcing from stratospheric volcanic eruptions, we find the resulting sensitivity to be of the same order of magnitude as previously published estimates for tropospheric aerosols during the so-called “global dimming” period (i.e., 1950s to 1980s). Analyzing cloud cover changes after volcanic eruptions reveals an increase in clear-sky days in both data sets. Quantifying the impact of stratospheric volcanic eruptions on clear-sky DTR over Europe provides valuable information for the study of the radiative effect of stratospheric aerosols and for geoengineering purposes.
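The core anomaly calculation (DTR relative to a roughly 20-year contemporary reference period) can be sketched on synthetic data. The station records, seasonal stratification, and clear-sky screening of the study are not reproduced here, and the imposed post-eruption reduction is an arbitrary illustrative value, not the paper's −0.08°C result.

```python
import numpy as np

def dtr_anomaly(dtr, ref_mask):
    """DTR anomaly relative to the mean over a reference period (boolean mask)."""
    dtr = np.asarray(dtr, dtype=float)
    return dtr - dtr[ref_mask].mean()

rng = np.random.default_rng(3)
n_days = 40 * 365
dtr = 10.0 + rng.normal(scale=1.5, size=n_days)   # synthetic station DTR (°C)
dtr[-3 * 365:] -= 0.3                             # imposed post-eruption DTR reduction
ref_mask = np.zeros(n_days, dtype=bool)
ref_mask[:20 * 365] = True                        # ~20-year reference period
anom = dtr_anomaly(dtr, ref_mask)
print(f"post-eruption mean anomaly: {anom[-3 * 365:].mean():+.2f} °C")
```

Averaging the anomaly over a multi-year post-eruption window, as done here, is what makes a small DTR shift detectable against day-to-day variability.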

Relevance:

30.00%

Publisher:

Abstract:

Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of the rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alteration as the decametric scale. The experimental flow field was modelled on the basis of a 2D streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual-porosity medium approach, which is linked to the flow model through the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that both the uncertainty regarding the length of individual fractures and the detailed geometry of the network along the flowpath between the injection and extraction boreholes are not critical, because flow is largely one-dimensional, whether through a single fracture or a network.
Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption Kd values are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.