19 results for "second order blind source separation"
at Université de Lausanne, Switzerland
Abstract:
This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). 
Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data. Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases, extending from August 1999 to October 2000 and from May to December 2001, which functioned as 'snapshots' in time of the three companies under study. 
The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure, behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. 
In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment, behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback, behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contributions drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level. This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. 
This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. 
Boundary redefinition suggests that the firm engages in triple-loop learning, whereby the firm changes its relations with stakeholders in profound ways, considers problems from a whole-system perspective, and examines the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change. Such theorizing has important implications for managerial practice; namely, a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. 
On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change: the firm is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing them from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As of yet, however, the nature of this relationship has remained elusive due to the fact that most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. 
For all of these possible combinations of vertical and lateral correlation lengths, however, the ratio between the two remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
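The second-order statistics involved can be made concrete with a small numerical sketch (not taken from the study itself; the grid size, correlation lengths, and 1/e read-off level below are illustrative assumptions): generate an anisotropic random velocity perturbation, compute its 2-D autocorrelation via the Wiener-Khinchin theorem, and estimate the correlation lengths and their aspect ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
ax, az = 16.0, 4.0  # assumed lateral and vertical correlation lengths (grid units)

# Build an anisotropic Gaussian random field by spectral filtering of white noise.
kx = np.fft.fftfreq(n)[None, :]
kz = np.fft.fftfreq(n)[:, None]
power = np.exp(-2.0 * np.pi**2 * ((ax * kx)**2 + (az * kz)**2))
field = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * np.sqrt(power)))

# 2-D autocorrelation via the Wiener-Khinchin theorem, normalized to 1 at zero lag.
acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(field))**2))
acf /= acf[0, 0]

def efolding_lag(profile):
    # first lag at which the normalized autocorrelation drops below 1/e
    return int(np.argmax(profile < 1.0 / np.e))

est_lateral = efolding_lag(acf[0, :n // 2])   # lags along x
est_vertical = efolding_lag(acf[:n // 2, 0])  # lags along z
print(est_lateral, est_vertical, est_lateral / est_vertical)
```

On a single realization the individual length estimates fluctuate, but their ratio tracks the imposed anisotropy, which is the non-uniqueness message of the abstract in miniature.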
Abstract:
Background: Glutathione (GSH) dysregulation at the gene, protein and functional levels observed in schizophrenia patients, and schizophrenia-like anomalies in GSH-deficit experimental models, suggest that genetic glutathione synthesis impairments represent one major risk factor for the disease (Do et al., 2009). In a randomized, double-blind, placebo-controlled, add-on clinical trial of 140 patients, the GSH precursor N-Acetyl-Cysteine (NAC, 2 g/day, 6 months) significantly improved the negative symptoms and reduced side-effects due to antipsychotics (Berk et al., 2008). In a subset of patients (n=7), NAC (2 g/day, 2 months, cross-over design) also improved auditory evoked potentials, namely the NMDA-dependent mismatch negativity (Lavoie et al., 2008). Methods: To determine whether increased GSH levels would modulate the topography of functional brain connectivity, we applied a multivariate phase synchronization (MPS) estimator (Knyazeva et al., 2008) to dense-array EEGs recorded during rest with eyes closed at the protocol onset, at the point of cross-over, and at its end. Phase synchronization phenomena are appealing because phases can be synchronized while the amplitudes stay uncorrelated. MPS measures the degree of interaction among the recorded neuronal oscillators by quantifying to what extent they behave like a macro-oscillator (i.e. the oscillators are phase synchronous). To assess the whole-head synchronization topography, we computed the MPS sensor-wise over the cluster of locations defined by the sensor itself and the surrounding ones belonging to its second-order neighborhood (Carmeli et al., 2005). Such a cluster spans about 12 cm on average. Results: The whole-head imaging revealed a specific synchronization landscape in the NAC compared to the placebo condition. In particular, NAC increased MPS over frontal and left temporal regions in a frequency-specific manner. 
Importantly, the topography and direction of MPS changes were similar and robust in all 7 patients. Moreover, these changes correlated with the changes in Liddle's score of disorganization (Liddle, 1987), thus linking EEG synchronization to the improvement of the clinical picture. Discussion: The data suggest an important pathway towards new therapeutic strategies that target GSH dysregulation in schizophrenia. They also show the utility of MPS mapping as a marker of treatment efficacy.
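The published MPS estimator is not reproduced here; as an illustrative stand-in in the same spirit (all signal parameters are assumed), a Hilbert-phase locking index shows the basic idea of quantifying how far several channels behave like one macro-oscillator:

```python
import numpy as np

def inst_phase(x):
    # instantaneous phase via the analytic signal (FFT-based Hilbert transform)
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

rng = np.random.default_rng(1)
t = np.arange(2048) / 256.0              # 8 s at 256 Hz (assumed)
rhythm = np.sin(2 * np.pi * 10 * t)      # shared 10 Hz oscillation

def phase_sync(noise_level, n_ch=8):
    # channels = shared rhythm + independent noise; the index is the mean
    # resultant length of the channel phases, averaged over time:
    # ~1 when phases are locked, ~1/sqrt(n_ch) when they are independent
    phases = np.array([inst_phase(rhythm + noise_level * rng.standard_normal(len(t)))
                       for _ in range(n_ch)])
    return np.abs(np.exp(1j * phases).mean(axis=0)).mean()

high = phase_sync(0.05)  # strong synchronization
low = phase_sync(5.0)    # noise-dominated channels
print(high, low)
```

The index separates the two regimes cleanly, which is the property exploited when mapping synchronization sensor-wise over neighborhoods.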
Abstract:
Single-trial analysis of human electroencephalography (EEG) has been recently proposed for better understanding the contribution of individual subjects to a group-analysis effect as well as for investigating single-subject mechanisms. Independent Component Analysis (ICA) has been repeatedly applied to concatenated single-trial responses and at a single-subject level in order to extract those components that resemble activities of interest. More recently we have proposed a single-trial method based on topographic maps that determines which voltage configurations are reliably observed at the event-related potential (ERP) level taking advantage of repetitions across trials. Here, we investigated the correspondence between the maps obtained by ICA versus the topographies that we obtained by the single-trial clustering algorithm that best explained the variance of the ERP. To do this, we used exemplar data provided from the EEGLAB website that are based on a dataset from a visual target detection task. We show there to be robust correspondence both at the level of the activation time courses and at the level of voltage configurations of a subset of relevant maps. We additionally show the estimated inverse solution (based on low-resolution electromagnetic tomography) of two corresponding maps occurring at approximately 300 ms post-stimulus onset, as estimated by the two aforementioned approaches. The spatial distribution of the estimated sources significantly correlated and had in common a right parietal activation within Brodmann's Area (BA) 40. Despite their differences in terms of theoretical bases, the consistency between the results of these two approaches shows that their underlying assumptions are indeed compatible.
Abstract:
A high-resolution mineralogical study (bulk-rock and clay-fraction) was carried out on the hemipelagic strata of the Angles section (Vocontian Basin, SE France) in which the Valanginian positive C-isotope excursion occurs. To investigate sea-level fluctuations and climate change respectively, a Detrital Index (DI: (phyllosilicates + quartz)/calcite) and a Weathering Index (WI: kaolinite/(illite + chlorite)) were established and compared to second-order sea-level fluctuations. In addition, the mineralogical data were compared with the High Nutrient Index (HNI, based on calcareous nannofossil taxa) data obtained by Duchamp-Alphonse et al. (2007), in order to assess the link between the hydrolysis conditions recorded on the surrounding continents and the trophic conditions inferred for the Vocontian Basin. It appears that the mineralogical distribution along the northwestern Tethyan margin is mainly influenced by sea-level changes during the Early Valanginian (Pertransiens to Stephanophorus ammonite Zones) and by climate variations from the late Early Valanginian to the base of the Hauterivian (top of the Stephanophorus to the Radiatus ammonite Zones). The sea-level fall observed in the Pertransiens ammonite Zone (Early Valanginian) is well expressed by an increase in detrital inputs (an increase in the DI) associated with a more proximal source and a shallower marine environment, whereas the sea-level rise recorded in the Stephanophorus ammonite Zone corresponds to a decrease in detrital influx (a decrease in the DI) as the source becomes more distal and the environment deeper. Interpretation of both the DI and the WI indicates that the positive C-isotope excursion (top of the Stephanophorus to the Verrucosum ammonite Zones) is associated with an increase in detrital inputs under a stable, warm and humid climate, probably related to greenhouse conditions, the strongest hydrolysis conditions being reached at the maximum of the positive C-isotope excursion. 
From the Verrucosum ammonite Zone to the base of the Hauterivian (Radiatus ammonite Zone), climatic conditions evolved from weak hydrolysis conditions and, most likely, a cooler climate (resulting in a decrease in detrital inputs) to a seasonal climate in which more humid seasons alternated with more arid ones. The comparison of the WI to the HNI shows that the nutrification recorded at the Angles section from the top of the Stephanophorus to the Radiatus ammonite Zones (including the positive C-isotope shift) is associated with climatic changes in the source areas. At that time, increased nutrient inputs were generally triggered by increased weathering processes in the source areas due to an acceleration in the hydrological cycle under greenhouse conditions. This scenario accords with the widely questioned palaeoenvironmental model proposed by Lini et al. (1992) and suggests that increasing greenhouse conditions were the main factor that drove the palaeoenvironmental changes observed in the hemipelagic realm of the Vocontian Basin during the Valanginian positive C-isotope shift. This high-resolution mineralogical study highlights short-term climatic changes during the Valanginian, probably associated with rapid changes in the C-cycle. Coeval massive Paraná-Etendeka flood basalt eruptions may explain such rapid perturbations.
Abstract:
A growing number of studies have been addressing the relationship between theory of mind (TOM) and executive functions (EF) in patients with acquired neurological pathology. In order to provide a global overview of the main findings, we conducted a systematic review of group studies in which we aimed to (1) evaluate the patterns of impaired and preserved abilities of both TOM and EF in groups of patients with acquired neurological pathology and (2) investigate the existence of particular relations between different EF domains and TOM tasks. The search was conducted in Pubmed/Medline. A total of 24 articles met the inclusion criteria. We considered for analysis classical, clinically accepted TOM tasks (first- and second-order false belief stories, the Faux Pas test, Happé's stories, the Mind in the Eyes task, and cartoon tasks) and EF domains (updating, shifting, inhibition, and access). The review suggests that (1) EF and TOM appear tightly associated; however, the few dissociations observed suggest they cannot be reduced to a single function; (2) no executive subprocess could be specifically associated with TOM performance; (3) the first-order false belief task and Happé's story task seem to be less sensitive to neurological pathologies and less associated with EF. Even though the analysis of the reviewed studies demonstrates a close relationship between TOM and EF in patients with acquired neurological pathology, the nature of this relationship must be further investigated. Studies investigating the ecological consequences of TOM and EF deficits, as well as intervention research, may bring further contributions to this question.
Abstract:
The evolution of a quantitative phenotype is often envisioned as a trait substitution sequence where mutant alleles repeatedly replace resident ones. In infinite populations, the invasion fitness of a mutant in this two-allele representation of the evolutionary process is used to characterize features about long-term phenotypic evolution, such as singular points, convergence stability (established from first-order effects of selection), branching points, and evolutionary stability (established from second-order effects of selection). Here, we try to characterize long-term phenotypic evolution in finite populations from this two-allele representation of the evolutionary process. We construct a stochastic model describing evolutionary dynamics at non-rare mutant allele frequency. We then derive stability conditions based on stationary average mutant frequencies in the presence of vanishing mutation rates. We find that the second-order stability condition obtained from second-order effects of selection is identical to convergence stability. Thus, in two-allele systems in finite populations, convergence stability is enough to characterize long-term evolution under the trait substitution sequence assumption. We perform individual-based simulations to confirm our analytic results.
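The individual-based simulations mentioned in the abstract are not reproduced here; as a minimal illustration of two-allele dynamics in a finite population (population size, selection coefficient, and replicate count are assumed for the sketch), a Wright-Fisher model shows the basic quantity at stake: a mutant either fixes or is lost, and selection shifts the fixation probability away from the neutral value 1/N.

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_prob(N, s, reps=2000):
    # Wright-Fisher dynamics for a single mutant allele with selection
    # coefficient s: each generation the allele count is resampled binomially
    # with a selection-weighted frequency, until loss (i == 0) or fixation (i == N)
    fixed = 0
    for _ in range(reps):
        i = 1
        while 0 < i < N:
            p = i * (1 + s) / (i * (1 + s) + (N - i))
            i = rng.binomial(N, p)
        fixed += int(i == N)
    return fixed / reps

neutral = fixation_prob(100, 0.0)    # theory: 1/N = 0.01
selected = fixation_prob(100, 0.05)  # selection raises this well above 1/N
print(neutral, selected)
```

Comparing such fixation (or stationary-frequency) statistics across mutant trait values is the finite-population analogue of reading off invasion fitness in the infinite-population setting.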
Abstract:
This thesis investigates the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. The thesis develops along two principal lines, namely: In the first part, we focus on two univariate risk models, i.e., deflated risk and reinsurance risk models. Therein we investigate their tail expansions under certain tail conditions on the common risks. Our main results are illustrated by typical examples and numerical simulations as well. Finally, the findings are formulated into applications in insurance, for instance approximations of Value-at-Risk and conditional tail expectations. The second part of this thesis is devoted to the following three bivariate models: The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under some second-order bivariate slowly varying conditions on the model. Then, we give some examples and present a small Monte Carlo simulation study, followed by an application to a real data set from insurance. The objective of our second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are extensively useful in statistical applications, and they are generated mainly by the normal mean-variance mixture and the scaled skew-normal mixture, which distinguish the tail dependence structure as shown by our principal results. 
The third bivariate risk model is concerned with the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
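The estimator class proposed in the thesis is not given in the abstract; a plain empirical upper-tail dependence coefficient (where the role of a tuning parameter is played by the tail fraction k/n, and the data are simulated Gaussian pairs used purely as a finite-sample illustration) shows the kind of quantity being estimated:

```python
import numpy as np

def upper_tail_dep(x, y, k):
    # empirical upper-tail dependence: among the k largest values of x,
    # the fraction whose paired y also lies among the k largest values of y
    n = len(x)
    rank_x = np.argsort(np.argsort(x))
    rank_y = np.argsort(np.argsort(y))
    return np.sum((rank_x >= n - k) & (rank_y >= n - k)) / k

rng = np.random.default_rng(5)
n = 20000
z = rng.standard_normal(n)
dependent_y = z + 0.1 * rng.standard_normal(n)   # strongly dependent pair
independent_y = rng.standard_normal(n)

dep = upper_tail_dep(z, dependent_y, 200)    # high: joint extremes co-occur
ind = upper_tail_dep(z, independent_y, 200)  # near the baseline k/n = 0.01
print(dep, ind)
```

The choice of k trades variance against bias, which is exactly where the second-order slowly varying conditions mentioned in the abstract enter the asymptotic analysis.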
Abstract:
Introduction: The interhemispheric asymmetries that originate from connectivity-related structuring of the cerebral cortex are compromised in schizophrenia (SZ). Recently, we revealed the whole-head topography of EEG synchronization in SZ (Jalili et al. 2007; Knyazeva et al. 2008). Here we extend that analysis to assess abnormalities in the asymmetry of synchronization, motivated by the evidence that the interhemispheric asymmetries suspected to be abnormal in SZ originate from the connectivity-related structuring of the cortex. Methods: Thirteen right-handed SZ patients and thirteen matched controls participated in this study; multichannel (128-electrode) EEGs were recorded for 3-5 minutes at rest. Laplacian EEGs (LEEGs) were then calculated using a 2-D spline. The LEEGs were analysed by calculating the power spectral density using Welch's average periodogram method. Furthermore, using a state-space based multivariate synchronization measure, the S-estimator, we analysed the correlates of functional cortico-cortical connectivity in SZ patients compared to the controls. The values of the S-estimator were obtained at three different spatial scales: first-order neighbors of each sensor location, second-order neighbors, and the whole hemisphere. The synchronization measures based on the LEEG of the alpha and beta bands were applied and tuned to various spatial scales, including local, intraregional, and long-distance levels. To assess between-group differences, we used a permutation version of Hotelling's T2 test. For correlation analysis, the Spearman rank correlation was calculated. 
Results: Compared to the controls, who had rightward asymmetry at a local level (LEEG power), rightward anterior and leftward posterior asymmetries at an intraregional level (first- and second-order S-estimator), and rightward global asymmetry (hemispheric S-estimator), SZ patients showed generally attenuated asymmetry, the effect being strongest for intraregional synchronization. This deviation in asymmetry across the anterior-to-posterior axis is consistent with the so-called Yakovlevian, or anticlockwise, cerebral torque. Moreover, the negative occipital and positive frontal asymmetry values suggest higher regional synchronization among the left occipital and the right frontal locations relative to their symmetrical counterparts. Correlation analysis linked the posterior intraregional and hemispheric abnormalities to the negative SZ symptoms, whereas the asymmetry of LEEG power appeared to be only weakly coupled to clinical ratings. The posterior intraregional abnormalities of asymmetry were shown to increase with the duration of the disease. The tentative links between these findings and gross anatomical asymmetries, including the cerebral torque and gyrification pattern in normal subjects and SZ patients, are discussed. Conclusions: Overall, our findings reveal abnormalities in synchronization asymmetry in SZ patients and heavy involvement of the right hemisphere in these abnormalities. These results indicate that anomalous asymmetry of cortico-cortical connections in schizophrenia is amenable to electrophysiological analysis.
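An S-estimator-style synchronization index can be sketched in a few lines (the signals below are simulated and all parameters are illustrative; the published estimator involves state-space embedding not shown here): the index is built from the entropy of the normalized eigenvalues of the channel correlation matrix, so independent channels score near 0 and fully synchronized channels score near 1.

```python
import numpy as np

def s_estimator(X):
    # eigenvalue-entropy synchronization index for a channels-by-samples array:
    # ~0 for independent channels (flat eigenvalue spectrum),
    # ~1 when one component dominates (strong synchronization)
    lam = np.linalg.eigvalsh(np.corrcoef(X))
    lam = np.clip(lam, 1e-12, None) / len(lam)
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(len(lam))

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 512)
rhythm = np.sin(2 * np.pi * 10 * t)
sync = rhythm + 0.1 * rng.standard_normal((8, 512))  # 8 channels sharing one signal
desync = rng.standard_normal((8, 512))               # 8 independent channels

s_hi = s_estimator(sync)
s_lo = s_estimator(desync)
print(s_hi, s_lo)
```

Computing such an index over first-order neighbors, second-order neighbors, or a whole hemisphere is what yields the multi-scale asymmetry maps described in the abstract.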
Abstract:
A series of cis-configured epoxides and aziridines containing hydrophobic moieties and amino acid esters were synthesized as new potential inhibitors of the secreted aspartic protease 2 (SAP2) of Candida albicans. Enzyme assays revealed the N-benzyl-3-phenyl-substituted aziridines 11 and 17 as the most potent inhibitors, with second-order inhibition rate constants (k(2)) between 56000 and 121000 M(-1) min(-1). The compounds were shown to be pseudo-irreversible dual-mode inhibitors: the intermediate esterified enzyme resulting from nucleophilic ring opening was hydrolyzed and yielded amino alcohols as transition-state-mimetic reversible inhibitors. The results of docking studies with the ring-closed aziridine forms of the inhibitors suggest binding modes mainly dominated by hydrophobic interactions with the S1, S1', S2, and S2' subsites of the protease, and docking studies with the processed amino alcohol forms predict additional hydrogen bonds of the new hydroxy group to the active-site Asp residues. C. albicans growth assays showed the compounds to decrease SAP2-dependent growth while not affecting SAP2-independent growth.
Resumo:
With improved B0 homogeneity and satisfactory gradient performance at high magnetic fields, snapshot gradient-recalled echo-planar imaging (GRE-EPI) can be performed at long echo times (TEs) on the order of T2*, which intrinsically allows obtaining strongly T2*-weighted images with substantial embedded anatomical detail in an ultrashort time. The aim of this study was to investigate the feasibility and quality of long-TE snapshot GRE-EPI images of rat brain at 9.4 T. When B0 inhomogeneities, especially second-order shim terms, were compensated for, a 200 x 200 microm2 in-plane resolution image was reproducibly obtained at long TE (>25 ms). The resulting coronal images at TE = 30 ms had diminished geometric distortions and thus embedded substantial anatomical detail. Together with their very consistent stability, such GRE-EPI images should make it possible to resolve functional data not only with high specificity but also with substantial anatomical detail, thereby allowing coregistration of the acquired functional data on the same image data set.
Resumo:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were examined for the effects of data weighting and of squared fit coefficients during curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA), and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
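The polynomial decomposition described above can be sketched as follows. This is a minimal illustration with synthetic data and illustrative function names, not the Guidelines' actual procedure: pixel variance is modeled as var(K) = e + q·K + f·K², where the constant term is electronic noise, the linear term quantum noise, and the quadratic term fixed-pattern (FP) noise.

```python
import numpy as np

# Hypothetical variance measurements at several detector air kerma (DAK) levels (µGy).
dak = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
# Synthetic data built from known components: electronic + quantum + fixed-pattern.
variance = 2.0 + 0.5 * dak + 1e-4 * dak**2

# Second-order polynomial fit: var(K) = e + q*K + f*K^2.
# (Guidelines-style weighting, e.g. w = 1/variance, would improve the fit
# at low exposures on real, noisy data; it is unnecessary for exact data.)
f, q, e = np.polyfit(dak, variance, 2)

def noise_fractions(k):
    """Fractional contribution of each noise component at DAK level k."""
    total = e + q * k + f * k**2
    return {
        "electronic": e / total,        # constant with exposure
        "quantum": q * k / total,       # grows linearly with exposure
        "fixed_pattern": f * k**2 / total,  # grows quadratically with exposure
    }

# Quantum noise dominates where q*K exceeds both e and f*K^2.
fr = noise_fractions(100.0)
```

On real measurements the recovered FP term is the most fragile, which is consistent with the underestimation of the FP fraction reported above.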
Resumo:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate the ones resulting from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
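The second-order stochastic dominance check described in the Chapter 2 summary can be sketched as follows. This is a minimal illustration with hypothetical function names, not the thesis's actual implementation: the absolute Lorenz curve at quantile p is the expected shortfall of the p lower tail scaled by p, and return series A second-order stochastically dominates B when A's curve lies weakly above B's at every quantile.

```python
import numpy as np

def absolute_lorenz(returns, quantiles):
    """Absolute Lorenz curve: for each quantile p, the mean of the
    p smallest returns (the lower-tail expected shortfall) times p."""
    x = np.sort(np.asarray(returns))
    n = len(x)
    curve = []
    for p in quantiles:
        k = max(1, int(np.ceil(p * n)))  # number of lower-tail observations
        curve.append(x[:k].mean() * p)
    return np.array(curve)

def ssd_dominates(a, b, quantiles=np.linspace(0.01, 1.0, 100)):
    """a second-order stochastically dominates b if a's absolute Lorenz
    curve lies (weakly) above b's pointwise across all quantiles."""
    return bool(np.all(absolute_lorenz(a, quantiles) >= absolute_lorenz(b, quantiles)))
```

A visual pointwise comparison of the two curves, as done in the thesis, corresponds to plotting `absolute_lorenz` for each return series over the same grid of quantiles.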