832 results for RLT-BASED APPROACH
Abstract:
Most sedimentary modelling programs developed in recent years focus on either terrigenous or carbonate marine sedimentation. Nevertheless, only a few programs have attempted to consider mixed terrigenous-carbonate sedimentation, and most of these are two-dimensional, which is a major restriction since geological processes take place in 3D. This paper presents the basic concepts of a new 3D mathematical forward simulation model for clastic sediments, which was developed from SIMSAFADIM, a previous 3D carbonate sedimentation model. The new extended model, SIMSAFADIM-CLASTIC, simulates processes of autochthonous marine carbonate production and accumulation, together with clastic transport and sedimentation in three dimensions of both carbonate and terrigenous sediments. Other models and modelling strategies may also provide realistic and efficient tools for predicting the stratigraphic architecture and facies distribution of sedimentary deposits. However, SIMSAFADIM-CLASTIC is innovative in attempting to simulate different sediment types with a process-based approach, making it a useful tool for 3D prediction of stratigraphic architecture and facies distribution in sedimentary basins. The model is applied to the Neogene Vallès-Penedès half-graben (western Mediterranean, NE Spain) to show its capacity when applied to a realistic geological situation involving interactions between terrigenous clastic and carbonate sediments.
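The process-based idea behind such forward models can be illustrated with a deliberately minimal sketch: topography is relaxed by downslope diffusive transport, and the material arriving in each cell is tallied as deposited sediment. Everything here (1D grid, diffusivity, profile values) is invented for illustration; SIMSAFADIM-CLASTIC itself solves far richer 3D fluid-flow, transport and carbonate-production physics.

```python
# Toy 1D "process-based" forward model: elevation diffuses downslope and
# positive elevation changes are counted as sediment accumulation.

def simulate(elev, kappa=0.2, steps=500):
    """Explicit diffusion of an elevation profile with fixed end cells.

    Returns the final profile and the cumulative deposit per cell
    (sum of positive elevation increments = sediment accumulation).
    """
    elev = list(elev)
    deposit = [0.0] * len(elev)
    for _ in range(steps):
        new = elev[:]
        for i in range(1, len(elev) - 1):
            d = kappa * (elev[i - 1] - 2 * elev[i] + elev[i + 1])
            new[i] = elev[i] + d
            if d > 0:
                deposit[i] += d  # material accumulating in the low
        elev = new
    return elev, deposit

# Half-graben-like profile: steep fault scarp on the left, gentler high right.
profile = [10.0] + [0.0] * 8 + [5.0]
final, fill = simulate(profile)
print("sediment accumulated next to the scarp:", round(fill[1], 2))
```

The deepest cells next to the scarp trap the most sediment, which is the qualitative behaviour a diffusive transport term produces in a half-graben fill.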
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from literature and expert opinion. Diagnostic strategies' validity was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows for all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not be as good in other disease settings.
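The serial-versus-parallel trade-off described above can be sketched with a toy Monte Carlo simulation. The per-disease sensitivities and specificities below are invented placeholders, not the values used in the study; the point is only that a serial workup can miss the true disease whenever an earlier test fires a false positive, while a parallel workup always tests every candidate disease.

```python
import random

random.seed(0)

# Hypothetical per-disease test characteristics: (sensitivity, specificity).
TESTS = {
    "cryptococcal meningitis": (0.95, 0.90),
    "cerebral toxoplasmosis": (0.90, 0.85),
    "tuberculous meningitis": (0.80, 0.92),
}

def run_test(has_disease, sens, spec):
    """Draw a positive/negative result for one diagnostic test."""
    p_positive = sens if has_disease else 1.0 - spec
    return random.random() < p_positive

def parallel_workup(true_disease):
    """Test for every disease at once; all positives are pursued."""
    return {d for d, (se, sp) in TESTS.items()
            if run_test(d == true_disease, se, sp)}

def serial_workup(true_disease):
    """Test diseases one by one; stop at the first positive result."""
    for d, (se, sp) in TESTS.items():
        if run_test(d == true_disease, se, sp):
            return {d}
    return set()

def sensitivity(workup, n=20000):
    """Fraction of simulated patients whose true disease is identified."""
    diseases = list(TESTS)
    hits = 0
    for _ in range(n):
        true_disease = random.choice(diseases)
        if true_disease in workup(true_disease):
            hits += 1
    return hits / n

sens_parallel = sensitivity(parallel_workup)
sens_serial = sensitivity(serial_workup)
print(f"parallel workup sensitivity ~ {sens_parallel:.2f}")
print(f"serial workup sensitivity   ~ {sens_serial:.2f}")
```

With these placeholder numbers the parallel workup comes out more sensitive, mirroring the direction of the study's result (92% parallel vs. 47% serial), though not its magnitudes.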
Abstract:
Carbapenemases should be accurately and rapidly detected, given their possible epidemiological spread and their impact on treatment options. Here, we developed a simple, easy and rapid matrix-assisted laser desorption ionization-time of flight (MALDI-TOF)-based assay to detect carbapenemases and compared this innovative test with four other diagnostic approaches on 47 clinical isolates. Tandem mass spectrometry (MS-MS) was also used to determine accurately the amount of antibiotic present in the supernatant after 1 h of incubation, and both the MALDI-TOF and MS-MS approaches exhibited 100% sensitivity and 100% specificity. By comparison, molecular genetic techniques (Check-MDR Carba PCR and Check-MDR CT103 microarray) showed 90.5% sensitivity and 100% specificity, as two strains of Aeromonas were not detected because their chromosomal carbapenemase is not targeted by the probes used in either kit. Altogether, this MALDI-TOF-based approach, which uses a stable 10-μg disk of ertapenem, was highly efficient in detecting carbapenemases, with a sensitivity higher than that of PCR and microarray.
Abstract:
In this thesis we present the design of a systematic, integrated, computer-based approach for detecting potential disruptions from an industry perspective. Following the design science paradigm, we iteratively develop several multi-actor, multi-criteria artifacts dedicated to environment scanning. The contributions of this thesis are both theoretical and practical. We demonstrate the successful use of multi-criteria decision-making methods for technology foresight. Furthermore, we illustrate the design of our artifacts using build-and-evaluate loops supported by a field study of the Swiss mobile payment industry. To increase the relevance of this study, we systematically interview key Swiss experts for each design iteration. As a result, our research provides a realistic picture of the current situation in the Swiss mobile payment market and reveals previously undiscovered weak signals of future trends. Finally, we suggest a generic design process for environment scanning.
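A weighted-sum score is the simplest member of the multi-criteria decision-making family used in such foresight exercises. The criteria, weights, alternatives, and scores below are all invented for illustration; the thesis itself develops more elaborate multi-actor, multi-criteria artifacts.

```python
# Minimal weighted-sum multi-criteria scoring sketch. All names and
# numbers are hypothetical expert inputs on a 0-10 scale.

criteria_weights = {
    "technical maturity": 0.40,
    "market readiness": 0.35,
    "regulatory risk": 0.25,
}

alternatives = {
    "NFC wallet":       {"technical maturity": 8, "market readiness": 6, "regulatory risk": 5},
    "QR payments":      {"technical maturity": 7, "market readiness": 8, "regulatory risk": 7},
    "SMS micropayment": {"technical maturity": 9, "market readiness": 3, "regulatory risk": 6},
}

def weighted_score(scores):
    """Aggregate expert scores into one number per alternative."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a]),
                 reverse=True)
print(ranking)
```

Sensitivity of the ranking to the weights (which actor-group set them, and how strongly) is exactly what multi-actor variants of such methods interrogate.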
Abstract:
The main environmental variables determining the community structure and functioning of Mediterranean shallow lentic ecosystems are described. These ecosystems are characterized by the unpredictability of their water inputs and the high variability of their water level and physical and chemical composition. Variations in flooding, salinity, and water turnover are determinant for species composition and nutrient dynamics. Taxon-based and size-based approaches to the study of the community structure of the aquatic organisms that colonise these ecosystems are also compared. The conventional taxonomic approach, based on the determination of species composition, has been used to identify patterns in species richness, distribution and temporal dynamics, and to establish the ecological requirements of species and their potential use as ecological indicators. This taxon-based approach is compared with a size-based approach, in which individuals are classified by their size. The size-based approach gives complementary information about community structure and dynamics, especially when communities are dominated by a single species. The use of size diversity combined with species diversity is suggested for a more complete understanding of community structuring in this type of ecosystem. Detailed examples of two Mediterranean shallow lentic ecosystems, the salt marshes of the Empordà wetlands and the Espolla temporary karstic pond, which differ in hydrology and water origin, are used to discuss the suitability of these different approaches.
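The complementarity of the two approaches can be made concrete with Shannon diversity computed once over taxa and once over size classes. The sample counts below are hypothetical: a nearly monospecific community has low species diversity, yet its individuals can still spread over many size classes.

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i over occupied classes."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical sample: nearly monospecific by taxon, yet the same
# 100 individuals spread over several body-size classes.
by_species = [95, 3, 2]              # abundance per species
by_size_class = [20, 35, 25, 15, 5]  # same individuals, binned by size

print(f"species diversity: {shannon(by_species):.2f}")
print(f"size diversity:    {shannon(by_size_class):.2f}")
```

Here size diversity is high while species diversity is near zero, which is precisely the situation where the size-based approach adds information about community structure.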
Abstract:
Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. New data sampled are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4 times, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
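The gain from model-based stratification can be reproduced in a few lines. In this sketch the suitability scores, the occupancy model, and the survey budget are all invented, and the niche model is assumed to be perfectly informative, so the improvement is only illustrative of the mechanism, not of the study's 1.8-4x result.

```python
import random

random.seed(1)

# Toy landscape: each site gets a habitat-suitability score in [0, 1];
# true occupancy probability rises steeply with suitability. Both the
# functional form and the numbers are invented.
N_SITES = 10_000
suitability = [random.random() for _ in range(N_SITES)]
occupied = [random.random() < 0.25 * s**6 for s in suitability]

def survey(sites):
    """Number of occupied sites among those visited."""
    return sum(occupied[i] for i in sites)

BUDGET = 200

# Strategy 1: simple random sampling of survey sites.
random_sites = random.sample(range(N_SITES), BUDGET)

# Strategy 2: model-based stratification -- spend the whole budget on
# the sites the (here: perfectly known) suitability model ranks highest.
ranked = sorted(range(N_SITES), key=lambda i: suitability[i], reverse=True)
model_sites = ranked[:BUDGET]

found_random, found_model = survey(random_sites), survey(model_sites)
print("random sampling found:", found_random)
print("model-based sampling found:", found_model)
```

In the adaptive version of the scheme, the occurrences found in `model_sites` would be fed back to refit the suitability model before the next survey round.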
Abstract:
Despite major progress in T lymphocyte analysis in melanoma patients, TCR repertoire selection and kinetics in response to tumor Ags remain largely unexplored. In this study, using a novel ex vivo molecular-based approach at the single-cell level, we identified a single, naturally primed T cell clone that dominated the human CD8(+) T cell response to the Melan-A/MART-1 Ag. The dominant clone expressed a high-avidity TCR to cognate tumor Ag, efficiently killed tumor cells, and prevailed in the differentiated effector-memory T lymphocyte compartment. TCR sequencing also revealed that this particular clone arose at least 1 year before vaccination, displayed long-term persistence, and homed efficiently to metastases. Remarkably, during concomitant vaccination over 3.5 years, the frequency of the pre-existing clone progressively increased, reaching up to 2.5% of the circulating CD8 pool, while its effector functions were enhanced. In parallel, the disease stabilized, but it subsequently progressed with loss of Melan-A expression by melanoma cells. Collectively, combined ex vivo analysis of T cell differentiation and clonality revealed for the first time a strong expansion of a tumor Ag-specific human T cell clone, comparable to protective virus-specific T cells. The observed successful boosting by peptide vaccination supports the further development of immunotherapy, including strategies to overcome immune escape.
Abstract:
The institutional regimes framework has previously been applied to the institutional conditions that support or hinder the sustainability of housing stocks. This resource-based approach identifies the actors across different sectors that have an interest in housing, how they use housing, the mechanisms affecting their use (public policy, use rights, contracts, etc.) and the effects of their uses on the sustainability of housing within the context of the built environment. The suitability of the institutional regimes framework for the many considerations of housing resilience is explored here. By identifying all the goods and services offered by the resource 'housing stock', researchers and decision-makers could improve the resilience of housing by better accounting for the ecosystem services used by housing, decreasing the vulnerability of housing to disturbances, and maximizing recovery and reorganization following a disturbance. The institutional regimes framework is found to be a promising tool for addressing housing resilience. Further questions are raised for translating this conceptual framework into a practical application underpinned with empirical data.
Abstract:
PURPOSE: To describe the anatomical characteristics and patterns of neurovascular compression in patients suffering from classic trigeminal neuralgia (CTN), using high-resolution magnetic resonance imaging (MRI). MATERIALS AND METHODS: The anatomy of the trigeminal nerve, the brain stem and the vascular structures related to this nerve was analysed in 100 consecutive patients treated with Gamma Knife radiosurgery for CTN between December 1999 and September 2004. MRI studies (T1, T1 enhanced and T2-SPIR) with simultaneous axial, coronal and sagittal visualization were dynamically assessed using the GammaPlan® software. Three-dimensional reconstructions were also developed in some representative cases. RESULTS: In 93 patients (93%), one or several vascular structures were in contact either with the trigeminal nerve or close to its origin in the pons. The superior cerebellar artery was involved in 71 cases (76%). Other vessels identified were the antero-inferior cerebellar artery, the basilar artery, the vertebral artery, and some venous structures. Vascular compression was found anywhere along the trigeminal nerve. The mean distance between the nerve compression and the origin of the nerve in the brainstem was 3.76 ± 2.9 mm (range 0-9.8 mm). In 39 patients (42%), the vascular compression was located proximally and in 42 (45%) it was located distally. Nerve dislocation or distortion by the vessel was observed in 30 cases (32%). CONCLUSIONS: The findings of this study are similar to those reported in surgical and autopsy series. This non-invasive MRI-based approach could be useful for diagnostic and therapeutic decisions in CTN, and it could help in understanding its pathogenesis.
Abstract:
This thesis examines novel methods for independent component analysis (ICA), based on colligation and on cross-moments. The colligation method is based on the colligation of weights: it uses two types of probability distributions instead of one, building on a general independence criterion. The colligation approach is applied with two asymptotic representations, using the Gram-Charlier and Edgeworth expansions to approximate the probability densities in these methods. The thesis also employs a cross-moment method based on fourth-order cross-moments, which is very similar to the FastICA algorithm. Both methods are examined on linear mixtures of two independent variables, where the source signals and the mixing matrices are unknown apart from the number of sources. The colligation method and its modifications are compared with FastICA and JADE, and a comparative analysis of performance and CPU time is carried out for the cross-moment-based methods, FastICA, and JADE on several mixed pairs.
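A minimal version of the fourth-order (kurtosis-based) contrast shared by such methods and FastICA can be sketched as follows: whiten the observed mixtures, then find the rotation that maximizes the total absolute kurtosis of the outputs. A grid search replaces FastICA's fixed-point iteration for clarity; the mixing matrix and sources are invented test data.

```python
import math
import random

random.seed(0)

def kurtosis(y):
    """Excess kurtosis, the fourth-order contrast of FastICA-like methods."""
    n = len(y)
    m = sum(y) / n
    v = sum((t - m) ** 2 for t in y) / n
    return sum((t - m) ** 4 for t in y) / n / (v * v) - 3.0

# Two independent sub-Gaussian sources and an (unknown) linear 2x2 mixture.
N = 2000
s1 = [random.uniform(-1, 1) for _ in range(N)]
s2 = [random.uniform(-1, 1) for _ in range(N)]
x1 = [0.8 * a + 0.6 * b for a, b in zip(s1, s2)]
x2 = [0.3 * a - 0.7 * b for a, b in zip(s1, s2)]

# Whitening: centre the mixtures, then rotate/scale to identity covariance.
m1, m2 = sum(x1) / N, sum(x2) / N
x1 = [t - m1 for t in x1]
x2 = [t - m2 for t in x2]
c11 = sum(t * t for t in x1) / N
c22 = sum(t * t for t in x2) / N
c12 = sum(a * b for a, b in zip(x1, x2)) / N
tr, det = c11 + c22, c11 * c22 - c12 * c12
l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)  # covariance eigenvalues
l2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
phi = 0.5 * math.atan2(2 * c12, c11 - c22)  # eigenvector angle
cp, sp = math.cos(phi), math.sin(phi)
u1 = [(cp * a + sp * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
u2 = [(-sp * a + cp * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

# After whitening, separation reduces to one rotation angle; pick the one
# maximizing total |kurtosis| (period pi/2, so [0, pi/2) suffices).
def rotated(th):
    y1 = [math.cos(th) * a + math.sin(th) * b for a, b in zip(u1, u2)]
    y2 = [-math.sin(th) * a + math.cos(th) * b for a, b in zip(u1, u2)]
    return y1, y2

def contrast(th):
    y1, y2 = rotated(th)
    return abs(kurtosis(y1)) + abs(kurtosis(y2))

best = max((k * math.pi / 2 / 180 for k in range(180)), key=contrast)
y1, y2 = rotated(best)

def abs_corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca, cb = [t - ma for t in a], [t - mb for t in b]
    num = sum(p * q for p, q in zip(ca, cb))
    return abs(num) / math.sqrt(sum(p * p for p in ca) * sum(q * q for q in cb))

match = max(abs_corr(y1, s1), abs_corr(y1, s2))
print(f"recovered component matches a true source with |r| = {match:.2f}")
```

The recovered components match the true sources up to the usual ICA indeterminacies of sign, scale and permutation.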
Abstract:
Adoptive cell transfer using engineered T cells is emerging as a promising treatment for metastatic melanoma. Such an approach allows one to introduce T cell receptor (TCR) modifications that, while maintaining the specificity for the targeted antigen, can enhance the binding and kinetic parameters of the interaction with peptides (p) bound to major histocompatibility complexes (MHC). Using the well-characterized 2C TCR/SIYR/H-2K(b) structure as a model system, we demonstrated that a binding free energy decomposition based on the MM-GBSA approach provides a detailed and reliable description of the TCR/pMHC interactions at the structural and thermodynamic levels. Starting from this result, we developed a new structure-based approach to rationally design new TCR sequences, and applied it to the BC1 TCR targeting the HLA-A2-restricted NY-ESO-1(157-165) cancer-testis epitope. Fifty-four percent of the designed sequence replacements exhibited improved pMHC binding as compared to the native TCR, with up to a 150-fold increase in affinity, while preserving specificity. Genetically engineered CD8(+) T cells expressing these modified TCRs showed improved functional activity compared with those expressing the BC1 TCR. We measured maximum levels of activity for TCRs within the upper limit of natural affinity, K_D ≈ 1-5 μM. Beyond the affinity threshold, at K_D < 1 μM, we observed an attenuation in cellular function, in line with the "half-life" model of T cell activation. Our computer-aided protein-engineering approach requires the 3D structure of the TCR-pMHC complex of interest, which can be obtained from X-ray crystallography. We have also developed a homology-modeling-based approach, TCRep 3D, to obtain accurate structural models of any TCR-pMHC complex when experimental data are not available.
Since the accuracy of the models depends on the prediction of the TCR orientation over pMHC, we have complemented the approach with a simplified rigid method to predict this orientation and successfully assessed it using all non-redundant TCR-pMHC crystal structures available. These methods potentially extend the use of our TCR engineering method to entire TCR repertoires for which no X-ray structure is available. We have also performed a steered molecular dynamics study of the unbinding of the TCR-pMHC complex to get a better understanding of how TCRs interact with pMHCs. This entire rational TCR design pipeline is now being used to produce rationally optimized TCRs for adoptive cell therapies of stage IV melanoma.
Abstract:
The political environment of security and defence has changed radically in the Western industrialised world since the Cold War. As a response to these changes, since the beginning of the twenty-first century most Western countries have adopted a ‘capabilities-based approach’ to developing and operating their armed forces. More responsive and versatile military capabilities must be developed to meet contemporary challenges. The systems approach is seen as a beneficial means of overcoming the traps of conventional thinking in resolving complex real-world issues. The main objectives of this dissertation are to explore and assess the means to enhance the development of military capabilities both in concept development and experimentation (CD&E) and in national defence materiel collaboration issues. This research provides a unique perspective, a systems approach, to the development areas of concern in resolving complex real-world issues. This dissertation seeks to increase the understanding of the military capability concept both as a whole and within its life cycle. The dissertation follows the generic functionalist systems methodology by Jackson. The methodology applies a comprehensive set of constitutive rules to examine the research objectives. This dissertation contributes to current studies of military capability. It presents two interdependent conceptual capability models: the comprehensive capability meta-model (CCMM) and the holistic capability life cycle model (HCLCM). These models holistically and systematically complement the existing, but still evolving, understanding of military capability and its life cycle. In addition, this dissertation contributes to the scientific discussion of defence procurement in its broad meaning by introducing a holistic model of national defence materiel collaboration between the defence forces, the defence industry and academia.
The model connects the key collaborative mechanisms, which currently work in isolation from each other, and takes into consideration the unique needs of each partner. This dissertation also contributes empirical evidence regarding the benefits of enterprise architectures (EA) to CD&E. The EA approach may add value to traditional concept development by increasing the clarity, consistency and completeness of the concept. The most important use considered for EA in CD&E is that it enables further utilisation of the concept created in the case project.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias caused by them in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Both measurement errors in spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of baseline hazard most. The design-based estimates based on data from respondents to all waves of interest and weighted by the last wave weights displayed the largest bias. Using all the available data, including the spells by attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazard model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
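The weighted Kaplan-Meier estimator at the heart of the IPCW correction can be sketched compactly. With unit weights the function below reduces to the classic estimator; in an IPCW analysis each subject would instead carry the inverse of an estimated probability of remaining uncensored, obtained from a censoring model (not shown). The tiny data set is invented for the worked example.

```python
def kaplan_meier(times, events, weights=None):
    """Weighted Kaplan-Meier estimate of S(t) at each distinct event time.

    With unit weights this is the classic estimator; with inverse-
    probability-of-censoring weights it corrects for censoring that
    depends on observed covariates (dependent censoring).
    """
    if weights is None:
        weights = [1.0] * len(times)
    data = sorted(zip(times, events, weights))
    surv = {}
    s = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        at_risk = sum(w for tt, e, w in data if tt >= t)
        events_w = sum(w for tt, e, w in data if tt == t and e)
        if events_w > 0:
            s *= 1.0 - events_w / at_risk
            surv[t] = s
        while i < len(data) and data[i][0] == t:
            i += 1  # advance past all subjects tied at this time
    return surv

# Tiny worked example: follow-up times, event flags (1 = event, 0 = censored).
times = [1, 2, 2, 3, 4, 5]
events = [1, 1, 0, 1, 0, 1]
km = kaplan_meier(times, events)
print(km)  # survival drops at t = 1, 2, 3, 5; t = 4 is a censoring only
```

Replacing the unit weights with IPCW weights is exactly the correction to the design weights that the simulation study evaluates for the design-based Kaplan-Meier estimator.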
Abstract:
Organizational creativity is increasingly important for organizations aiming to survive and thrive in complex and unexpectedly changing environments. It is a precondition of innovation and a driver of an organization’s performance success. Whereas innovation research increasingly promotes high-involvement and participatory innovation, models of organizational creativity are still mainly based on an individual-creativity view. Likewise, organizational creativity and innovation are sometimes defined almost identically and used as interchangeable constructs, while at other times they are treated as distinct: creativity is seen as the generation of novel and useful ideas, whereas innovation is seen as the implementation of these ideas. The research streams of innovation and organizational creativity seem to be advancing somewhat separately, although together they could provide many synergy advantages. This study therefore addresses three main research gaps. First, as knowledge and knowing are increasingly expertized and distributed in organizations, the conceptualization of organizational creativity needs to face that perspective rather than relying on the individual-creativity view. Thus, the conceptualization of organizational creativity needs clarification, especially as an organizational-level phenomenon (i.e., creativity by an organization). Second, approaches to consciously building organizational creativity, so as to increase the capacity of an organization to demonstrate novelty in its knowledgeable actions, are rare. Current creativity techniques are mainly based on individual-creativity views, and they mainly focus on occasional problem-solving cases among a limited number of individuals, whereas approaches for developing collective creativity and creativity by the organization are lacking.
Third, in terms of organizational creativity as a collective phenomenon, the engagement, contributions, and participation of organizational members in activities of common meaning creation are more important than individual-creativity skills. Therefore, development approaches that foster creativity as social, emerging, embodied, and collective creativity are needed to complement the current creativity techniques. To address these gaps, the study takes a multi-paradigm perspective on the following three objectives. The first objective of this study is to clarify and extend the conceptualization of organizational creativity. The second is to study the development of organizational creativity. The third is to explore how an improvisational-theater-based approach fosters organizational creativity. The study consists of two parts: the introductory part (part I) and six publications (part II). Each publication addresses the research questions of the thesis through detailed subquestions. The study makes three main contributions to the research on organizational creativity. First, it contributes to the conceptualization of organizational creativity by extending the current view. This study views organizational creativity as a multilevel construct comprising both individual and collective (group and organizational) creativity. In contrast to current views of organizational creativity, this study builds on organizational (collective) knowledge that is based on, and demonstrated through, the knowledgeable actions of an organization as a whole.
The study defines organizational creativity as an overall ability of an organization to demonstrate novelty in its knowledgeable actions (through what it does and how it does what it does). Second, this study contributes to the development of organizational creativity as a multi-level phenomenon, introducing developmental approaches that address two or more of these levels simultaneously. More specifically, the study presents cross-level approaches to building organizational creativity, using an approach based on improvisational theater and considering assessment of organizational renewal capability. Third, the study contributes to the development of organizational creativity through an improvisational-theater-based approach in a twofold manner. First, the approach fosters individual and collective creativity simultaneously and builds space for creativity to occur. Second, it models collective and distributed creativity processes, thereby contributing to the conceptualization of organizational creativity.