974 results for Explicit guarantees
Abstract:
This dissertation provides a novel theory of securitization based on intermediaries minimizing the moral hazard that insiders can misuse assets held on-balance sheet. The model predicts how intermediaries finance different assets. Under deposit funding, the moral hazard is greatest for low-risk assets that yield sizable returns in bad states of nature; under securitization, it is greatest for high-risk assets that require high guarantees and large reserves. Intermediaries thus securitize low-risk assets. In an extension, I identify a novel channel through which government bailouts exacerbate the moral hazard and reduce total investment irrespective of the funding mode. This adverse effect is stronger under deposit funding, implying that intermediaries finance more risky assets off-balance sheet. The dissertation discusses the implications of different forms of guarantees. With explicit guarantees, banks securitize assets with either low information-intensity or low risk. By contrast, with implicit guarantees, banks only securitize assets with high information-intensity and low risk. Two extensions to the benchmark static and dynamic models are discussed. First, an extension to the static model studies the optimality of tranching versus securitization with guarantees. Tranching eliminates agency costs but worsens adverse selection, while securitization with guarantees does the opposite. When the quality of underlying assets in a certain security market is sufficiently heterogeneous, and when the highest quality assets are perceived to be sufficiently safe, securitization with guarantees dominates tranching. Second, in an extension to the dynamic setting, the moral hazard of misusing assets held on-balance sheet naturally gives rise to the moral hazard of weak ex-post monitoring in securitization. The use of guarantees reduces the dependence of banks' ex-post payoffs on monitoring efforts, thereby weakening monitoring incentives. The incentive to monitor under securitization with implicit guarantees is the weakest among all funding modes, as implicit guarantees allow banks to renege on their monitoring promises without being declared bankrupt and punished.
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by an intra-chain pairwise contact energy. Although this approximation has proven useful in several circumstances, it is limited in other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that the implicit and explicit representations of solvent effects can be energetically equivalent only if the local solvent properties are invariant in time and space. Next, applying the same reasoning used for the lattice models, two mutually consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, where in the latter the solvent properties are now allowed to fluctuate. It is then shown that the chain configurational evolution, as well as the equilibrium conformation of the globule, are significantly distinct in the implicit and explicit solvent systems. In strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, whose radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results for the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
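The pairwise-contact approximation that this abstract starts from is easiest to picture in a toy model. The sketch below is an editorial illustration under assumed conventions, not the authors' model: an implicit-solvent chain on a 2D square lattice whose only energy is a hypothetical contact parameter EPS between non-bonded nearest neighbours, relaxed with Metropolis end moves. In an explicit-solvent variant the empty sites would instead carry fluctuating solvent degrees of freedom.

    import math
    import random

    EPS = -1.0  # assumed energy per non-bonded nearest-neighbour contact

    def contact_energy(chain):
        """Sum EPS over non-consecutive monomers that sit on adjacent lattice sites."""
        occupied = {pos: i for i, pos in enumerate(chain)}
        energy = 0.0
        for i, (x, y) in enumerate(chain):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = occupied.get((x + dx, y + dy))
                if j is not None and j > i + 1:  # count each contact once; skip bonded pairs
                    energy += EPS
        return energy

    def end_move(chain):
        """Propose moving one chain end to a free site adjacent to its bonded neighbour."""
        new = list(chain)
        end = random.choice((0, len(chain) - 1))
        ax, ay = chain[1] if end == 0 else chain[-2]
        free = [(ax + dx, ay + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (ax + dx, ay + dy) not in chain]
        if free:
            new[end] = random.choice(free)
        return new

    def metropolis_step(chain, temperature):
        trial = end_move(chain)
        dE = contact_energy(trial) - contact_energy(chain)
        if dE <= 0 or random.random() < math.exp(-dE / temperature):
            return trial
        return chain

    # Usage: start from a stretched 12-mer and relax it at a fixed reduced temperature.
    chain = [(i, 0) for i in range(12)]
    for _ in range(5000):
        chain = metropolis_step(chain, temperature=1.0)
    print(contact_energy(chain))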
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from >10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
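The kind of parameterization described here, which requires every native structure to score better than a very large set of misfolds, can be illustrated with a linear contact potential: the energy is a dot product of contact-type counts and weights, and the weights can be adjusted perceptron-style on violated native/decoy pairs. The sketch below is a generic example of that idea under those assumptions, not the paper's actual method; the function name and synthetic data are hypothetical.

    import numpy as np

    def fit_contact_weights(natives, decoy_sets, n_types, epochs=100, lr=0.01):
        """natives[k] and the rows of decoy_sets[k] are contact-count feature vectors of
        length n_types; a lower dot(w, features) should indicate a better fold."""
        w = np.zeros(n_types)
        for _ in range(epochs):
            violations = 0
            for native, decoys in zip(natives, decoy_sets):
                for decoy in decoys:
                    if w @ native >= w @ decoy - 1.0:  # native not yet better by a unit margin
                        w += lr * (decoy - native)     # lower native energy relative to the decoy
                        violations += 1
            if violations == 0:
                break
        return w

    # Usage with synthetic contact counts: 3 contact types, 2 natives, 5 decoys each.
    rng = np.random.default_rng(0)
    natives = [rng.poisson(4, 3).astype(float) for _ in range(2)]
    decoy_sets = [rng.poisson(6, (5, 3)).astype(float) for _ in range(2)]
    print(fit_contact_weights(natives, decoy_sets, n_types=3))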
Abstract:
1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
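The congruence test in point 3 amounts to a logistic regression of observed patch occupancy on the PVA-predicted probability of occupancy. A minimal sketch of such a test, run here on synthetic placeholder data rather than the study's patches, might look as follows.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    predicted_prob = rng.uniform(0, 1, 40)                    # PVA output, one value per patch
    observed = (rng.uniform(0, 1, 40) < predicted_prob) * 1   # field occupancy (0/1)

    model = sm.Logit(observed, sm.add_constant(predicted_prob)).fit(disp=False)
    print(model.params)    # intercept and slope; a significant positive slope indicates congruence
    print(model.pvalues)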
Abstract:
Corporate portals, enabled by information and communication technology tools, provide the integration of heterogeneous data from internal information systems, which become available for access and sharing by the interested community. They can be considered an important instrument for making explicit knowledge available in the organization, since they allow faster and safer information exchanges, enabling a healthy collaborative environment. In the specific case of major Brazilian universities, corporate portals play a fundamental role, since they offer an enormous variety and amount of information and knowledge owing to the multiplicity of their activities. This study aims to point out important aspects of the explicit knowledge expressed by the universities surveyed, through an analysis of the content offered in their corporate portals. This is an exploratory study based on direct observation of the contents of the corporate portals of two public universities and three private ones. A comparative analysis of the contents of these portals was carried out; it can be useful for evaluating their use as a factor in optimizing the explicit knowledge generated in the university. As a result, important differences were found in the composition and content of the corporate portals of the public universities compared with the private institutions. The main differences concern the kind of services and the destination of the information, which target different audiences. It could also be concluded that the private universities surveyed focus on processes related to serving students, supporting courses and disseminating information to the public interested in joining the institution, whereas the public universities analysed prioritize more specific information, directed at the dissemination of research developed internally or at institutional objectives.
Abstract:
Service offerings are largely intangible in nature. Customers are thus unable to assess the purchase outcome prior to experience, rendering the risk of possible customer dissatisfaction very high. It is argued that the concept of service guarantees proposed by services management theory can be effectively utilised to reduce the perceived risk of dissatisfaction for the customer in service organisations. Additionally, it is suggested that service guarantees force management to undertake activities which elevate the superiority of the organisation in the eyes of the customer and, thus, the opportunity to transform one-time customers into loyal ones. The purpose of this paper is twofold: first, to illustrate how customers’ behavioural intentions can be influenced by the use of a service guarantee; and second, to outline a systematic process that can help service business managers to develop and implement an effective service guarantee. This research highlights the numerous benefits available to service organisations by utilising the service guarantee as a strategic tool. Some of the important management implications are also outlined.
Abstract:
Reaction between 5-(4-amino-2-thiabutyl)-5-methyl-3,7-dithianonane-1,9-diamine (N3S3) and 5'-methyl-2,2'-bipyridine-5-carbaldehyde and subsequent reduction of the resulting imine with sodium borohydride results in a potentially ditopic ligand (L). Treatment of L with one equivalent of an iron(II) salt led to the monoprotonated complex [Fe(HL)](3+), isolated as the hexafluorophosphate salt. The presence of characteristic bands for the tris(bipyridyl)iron(II) chromophore in the UV/vis spectrum indicated that the iron(II) atom is coordinated octahedrally by the three bipyridyl (bipy) groups. The [Fe(bipy)3] moiety encloses a cavity composed of the N3S3 portion of the ditopic ligand. The mononuclear and monomeric nature of the complex [Fe(HL)](3+) has been established also by accurate mass analysis. [Fe(HL)](3+) displays reduced stability to base compared with the complex [Fe(bipy)3](2+). In aqueous solution [Fe(HL)](3+) exhibits irreversible electrochemical behaviour with an oxidation wave ca. 60 mV to more positive potential than [Fe(bipy)3](2+). Investigations of the interaction of [Fe(L)](2+) with copper(II), iron(II), and mercury(II) using mass spectroscopic and potentiometric methods suggested that where complexation occurred, fewer than six of the N3S3 cavity donors were involved. The high affinity of the complex [Fe(L)](2+) for protons is one reason suggested to contribute to the reluctance to coordinate a second metal ion.
Abstract:
We detail the automatic construction of R matrices corresponding to (the tensor products of) the (0_m | α_n) families of highest-weight representations of the quantum superalgebras Uq[gl(m|n)]. These representations are irreducible, contain a free complex parameter α, and are 2^(mn)-dimensional. Our R matrices are actually (sparse) rank 4 tensors, containing a total of 2^(4mn) components, each of which is in general an algebraic expression in the two complex variables q and α. Although the constructions are straightforward, we describe them in full here, to fill a perceived gap in the literature. As the algorithms are generally impracticable for manual calculation, we have implemented the entire process in MATHEMATICA, illustrating our results with Uq[gl(3|1)]. (C) 2002 Published by Elsevier Science B.V.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
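For orientation, the implicit generalized alpha update can be written compactly for a single degree of freedom. The sketch below follows the standard Chung-Hulbert parameterization in terms of the high-frequency spectral radius rho_inf; it is a textbook illustration of the scheme discussed in the abstract, not the authors' implementation, and the test values are arbitrary.

    import math

    def generalized_alpha_params(rho_inf):
        a_m = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
        a_f = rho_inf / (rho_inf + 1.0)
        gamma = 0.5 - a_m + a_f
        beta = 0.25 * (1.0 - a_m + a_f) ** 2
        return a_m, a_f, beta, gamma

    def step(d, v, a, t, h, m, c, k, force, a_m, a_f, beta, gamma):
        """Advance displacement d, velocity v and acceleration a of m*a + c*v + k*d = f(t) by h."""
        d_pred = d + h * v + h * h * (0.5 - beta) * a
        v_pred = v + h * (1.0 - gamma) * a
        f_mid = force(t + (1.0 - a_f) * h)   # load evaluated at the alpha_f-shifted time
        lhs = (1.0 - a_m) * m + (1.0 - a_f) * (c * gamma * h + k * beta * h * h)
        rhs = (f_mid - a_m * m * a
               - c * ((1.0 - a_f) * v_pred + a_f * v)
               - k * ((1.0 - a_f) * d_pred + a_f * d))
        a_new = rhs / lhs
        return d_pred + beta * h * h * a_new, v_pred + gamma * h * a_new, a_new

    # Usage: undamped unit oscillator in free vibration from d0 = 1 (exact solution cos(t)).
    a_m, a_f, beta, gamma = generalized_alpha_params(rho_inf=0.8)
    d, v, a, t, h = 1.0, 0.0, -1.0, 0.0, 0.05   # a0 = -k*d0/m satisfies equilibrium at t = 0
    for _ in range(200):
        d, v, a = step(d, v, a, t, h, 1.0, 0.0, 1.0, lambda s: 0.0, a_m, a_f, beta, gamma)
        t += h
    print(d, math.cos(t))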
Abstract:
This work brings together the elements that make up the current conception of social assistance in Brazil since the promulgation of the 1988 constitution, when social assistance was recognized for the first time as a citizenship right and a legal duty of the State, guaranteed by the Supreme Law. Under this law, social assistance presupposed a logic of full employment and was therefore directed primarily at those unable to work. In a context of structural unemployment, however, it has come to be understood in terms of guarantees of security, seeking to provide social protection to those who are able to work, in view of the deterioration of the labour market, the restriction of opportunities and income, and the progressive growth of unemployment and informality. The central idea is that this is a critical description of the conception of social assistance in Brazil, problematizing each of its most explicit arguments in order to reveal an intentionality tied to a particular perspective of the State. We use the term conception in the sense of conceiving, thinking, feeling, understanding or interpreting something. Social assistance today responds to a single process that combines historical, economic, political, social and ideological aspects, and in this sense it represents a conception of the world and a project of society, defended by the dominant class and grounded in the exploitation of labour. The current conception of social assistance thus follows a new form of social policy based on the perspective of human development and the fight against poverty, in which the main emphasis has been to remove discussions of and interventions in poverty from the scope of the social question, locating them instead in individuals and their "incapacities". By assuming responsibility or co-responsibility for developing individuals' capacities, social assistance signals the tendency towards a new conception of social welfare.
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, which is required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often requires user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application where user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy, while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, where the user input is mapped to a non-Cartesian space while this information is used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy, as well as in terms of total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
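As a rough two-dimensional analogue of the interaction paradigm described above (an editorial illustration under assumed weights and kernel width, not the paper's formulation), a user-supplied point can be turned into a localized pull on the segmented boundary, combined with a shape-regularization step that keeps the edited contour smooth.

    import numpy as np

    def apply_user_edit(contour, click, pull=0.8, sigma=5.0, smooth=0.2, iterations=10):
        """contour: (N, 2) array of ordered boundary points; click: (2,) user-supplied point."""
        pts = contour.astype(float).copy()
        for _ in range(iterations):
            # attraction towards the click, concentrated on the closest part of the boundary
            d = np.linalg.norm(pts - click, axis=1)
            w = np.exp(-(d - d.min()) ** 2 / (2.0 * sigma ** 2))
            pts += pull * w[:, None] * (click - pts) / iterations
            # shape regularization: move each point towards the mean of its two neighbours
            neighbours = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
            pts += smooth * (neighbours - pts)
        return pts

    # Usage: a circular boundary edited with a single click placed outside the circle.
    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    circle = np.stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)], axis=1)
    edited = apply_user_edit(circle, click=np.array([85.0, 50.0]))
    print(edited[np.argmax(edited[:, 0])])   # the boundary has moved towards the click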
Abstract:
OBJECTIVE: To develop an instrument to assess discrimination effects on health outcomes and behaviors, capable of distinguishing harmful differential treatment effects from their interpretation as discriminatory events. METHODS: Successive versions of an instrument were developed based on a systematic review of instruments assessing racial discrimination, focus groups and review by a panel comprising seven experts. The instrument was refined using cognitive interviews and pilot-testing. The final version of the instrument was administered to 424 undergraduate college students in the city of Rio de Janeiro, Southeastern Brazil, in 2010. Structural dimensionality, two types of reliability and construct validity were analyzed. RESULTS: Exploratory factor analysis corroborated the hypothesis of the instrument's unidimensionality, and seven experts verified its face and content validity. The internal consistency was 0.8, and test-retest reliability was higher than 0.5 for 14 out of 18 items. The overall score was higher among socially disadvantaged individuals and correlated with adverse health behaviors/conditions, particularly when differential treatments were attributed to discrimination. CONCLUSIONS: These findings indicate the validity and reliability of the instrument developed. The proposed instrument enables the investigation of novel aspects of the relationship between discrimination and health.
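The two reliability figures quoted (an internal consistency of 0.8 and item-level test-retest reliability above 0.5) are typically computed as Cronbach's alpha and as per-item correlations between the two administrations. A minimal sketch on synthetic placeholder responses, sized to match the abstract's 424 respondents and 18 items, is shown below.

    import numpy as np

    def cronbach_alpha(scores):
        """scores: (respondents, items) matrix of item responses."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(2)
    latent = rng.normal(size=(424, 1))                      # one underlying dimension
    items_t1 = latent + 0.8 * rng.normal(size=(424, 18))    # 18 items, first administration
    items_t2 = items_t1 + 0.5 * rng.normal(size=(424, 18))  # retest with added noise

    print(round(cronbach_alpha(items_t1), 2))
    retest_r = [np.corrcoef(items_t1[:, j], items_t2[:, j])[0, 1] for j in range(18)]
    print(round(min(retest_r), 2), round(max(retest_r), 2))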
Abstract:
After a historical introduction, the bulk of the thesis concerns the study of a declarative semantics for logic programs. The main original contributions are:
- WFSX (Well-Founded Semantics with eXplicit negation), a new semantics for logic programs with explicit negation (i.e. extended logic programs), which compares favourably in its properties with other extant semantics.
- A generic characterization schema that facilitates comparisons among a diversity of semantics of extended logic programs, including WFSX.
- An autoepistemic and a default logic corresponding to WFSX, which solve existing problems of the classical approaches to autoepistemic and default logics, and clarify the meaning of explicit negation in logic programs.
- A framework for defining a spectrum of semantics of extended logic programs based on the abduction of negative hypotheses. This framework allows for the characterization of different levels of scepticism/credulity, consensuality, and argumentation. One of the semantics of abduction coincides with WFSX.
- O-semantics, a semantics that uniquely adds more CWA hypotheses to WFSX. The techniques used for doing so are applicable as well to the well-founded semantics of normal logic programs.
- By introducing explicit negation into logic programs, contradiction may appear. I present two approaches for dealing with contradiction, and show their equivalence. One approach consists in avoiding contradiction, and is based on restrictions in the adoption of abductive hypotheses. The other consists in removing contradiction, and is based on a transformation of contradictory programs into noncontradictory ones, guided by the reasons for contradiction.