941 results for Characteristic Initial Value Problem


Relevance: 30.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of less than 1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that can computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled coil, known to mediate a variety of fundamental biological interactions. Coiled coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
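As a rough illustration of the two network descriptors mentioned above, the following sketch (not the thesis code; the 8 Å contact cutoff and the toy coordinates are assumptions) builds a residue contact network from C-alpha coordinates and computes its characteristic path length and clustering coefficient with networkx.

```python
# A minimal sketch: residue contact network from C-alpha coordinates (hypothetical
# 8 angstrom cutoff), then the two small-world descriptors discussed above.
import numpy as np
import networkx as nx

def contact_network(ca_coords: np.ndarray, cutoff: float = 8.0) -> nx.Graph:
    """Nodes are residues; an edge joins residues whose C-alpha atoms lie within `cutoff`."""
    n = len(ca_coords)
    dist = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 2, n):          # skip trivially adjacent residues
            if dist[i, j] <= cutoff:
                g.add_edge(i, j)
    return g

def small_world_descriptors(g: nx.Graph) -> tuple[float, float]:
    """Characteristic path length (on the largest component) and average clustering coefficient."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant), nx.average_clustering(g)

if __name__ == "__main__":
    # Toy "protein chain": a 3D random walk standing in for real C-alpha coordinates.
    coords = np.cumsum(np.random.default_rng(0).normal(scale=2.0, size=(120, 3)), axis=0)
    L, C = small_world_descriptors(contact_network(coords))
    print(f"characteristic path length = {L:.2f}, clustering coefficient = {C:.3f}")
```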
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structural templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, taking into account the information available in the present databases of molecular functions and structures.
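A minimal sketch of the kind of clustering step described above, under assumed thresholds (the 40% identity and 90% coverage values are placeholders, not those of the thesis): pairwise BLAST-like hits are turned into clusters by single linkage, keeping an edge only when both the identity and the coverage constraints hold on both sequences.

```python
# A minimal sketch (not the thesis pipeline): single-linkage clustering of sequences
# from pairwise BLAST-like hits, keeping an edge only when sequence identity and
# alignment coverage (on both sequences) pass hypothetical thresholds.
from dataclasses import dataclass

@dataclass
class Hit:
    query: str
    subject: str
    identity: float      # percent identity of the alignment
    aln_len: int         # alignment length
    qlen: int            # full query length
    slen: int            # full subject length

def clusters(hits: list[Hit], min_identity: float = 40.0, min_coverage: float = 0.9) -> list[set[str]]:
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path compression
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for h in hits:
        cov_q, cov_s = h.aln_len / h.qlen, h.aln_len / h.slen
        if h.identity >= min_identity and min(cov_q, cov_s) >= min_coverage:
            union(h.query, h.subject)
        else:                                # still register the sequences as singletons
            find(h.query); find(h.subject)

    groups: dict[str, set[str]] = {}
    for seq in parent:
        groups.setdefault(find(seq), set()).add(seq)
    return list(groups.values())

print(clusters([Hit("A", "B", 62.0, 300, 310, 320), Hit("B", "C", 35.0, 300, 310, 900)]))
# -> [{'A', 'B'}, {'C'}]  (A-B passes both constraints; B-C fails identity and coverage)
```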

Relevance: 30.00%

Abstract:

The olive oil extraction industry is responsible for the production of large quantities of vegetation waters, consisting of the constitutive water of the olive fruit and the water used during the process. This by-product represents an environmental problem in olive-growing areas because of its high content of organic matter, with high BOD5 and COD values. For that reason the disposal of vegetation water is very difficult and requires prior depollution. The organic matter of vegetation water mainly consists of polysaccharides, sugars, proteins, organic acids, oil and polyphenols. These last compounds are the main cause of the pollution problems, due to their antimicrobial activity, but at the same time they are well known for their antioxidant properties. The most concentrated phenolic compounds in the waters, and also in virgin olive oils, are secoiridoids such as oleuropein, demethyloleuropein and ligstroside derivatives (the dialdehydic form of elenolic acid linked to 3,4-DHPEA or p-HPEA (3,4-DHPEA-EDA or p-HPEA-EDA) and an isomer of the oleuropein aglycon (3,4-DHPEA-EA)). The management of olive oil vegetation water has been extensively investigated and several valorisation methods have been proposed, such as its direct use as fertilizer or its transformation by physico-chemical or biological treatments. During the last years researchers have focused their interest on the recovery of the phenolic fraction from this waste, looking to exploit it as a natural antioxidant source. At present only a few contributions have addressed large-scale phenol recovery, and further investigations are required to evaluate the feasibility and costs of the proposed processes. The present PhD thesis reports a preliminary description of a new industrial-scale process for the recovery of the phenolic fraction from olive oil vegetation water treated with enzymes, by direct membrane filtration (microfiltration/ultrafiltration with a cut-off of 250 kDa, ultrafiltration with a cut-off of 7 kDa/10 kDa, and nanofiltration/reverse osmosis), partial purification with a purification system based on SPE and a liquid-liquid extraction (LLE) system, with a simultaneous reduction of the related pollution problems. The phenolic fractions of all the samples obtained were characterized qualitatively and quantitatively by HPLC analysis. The process efficiency, in terms of flows and of phenolic recovery, gave good results: the final phenolic recovery is about 60% with respect to the initial content of the vegetation waters. The final concentrate showed a phenol content high enough to suggest a possible use as a zootechnical nutritional supplement. The purification of the final concentrate guaranteed a high purity of the phenolic extract, especially in the SPE procedure using XAD-16 (73% of the total phenolic content of the concentrate). This purity level could permit a future use in the food industry as a food additive or, thanks to the strong antioxidant activity, in the pharmaceutical or cosmetic industry. The depollution of the vegetation water also gave good results: the final reverse osmosis permeate has a low pollutant load in terms of COD and BOD5 (2% of the values of the initial vegetation water), which could allow its recycling in the virgin olive oil mechanical extraction system, saving water and thus reducing the disposal costs of the oil industry.
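For orientation, the recovery figure quoted above is a simple mass balance; the sketch below (illustrative numbers only, not the thesis data) shows the calculation.

```python
# A toy mass-balance sketch: the phenolic recovery of a concentration step is the
# phenol mass in the final concentrate divided by the phenol mass in the feed
# vegetation water. All numbers below are hypothetical.
def phenol_recovery(c_feed_g_per_l, v_feed_l, c_conc_g_per_l, v_conc_l):
    """Return the fractional recovery of phenols in the concentrate."""
    return (c_conc_g_per_l * v_conc_l) / (c_feed_g_per_l * v_feed_l)

# Hypothetical example: 1000 L of vegetation water at 2 g/L concentrated to 60 L at 20 g/L.
print(f"recovery = {phenol_recovery(2.0, 1000, 20.0, 60):.0%}")   # -> 60%
```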

Relevance: 30.00%

Abstract:

Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity has been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of the Einstein equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on assumptions of realism about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are embedded in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. Therefore, according to the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the absence of matter (highlighted by the "ontic part" of the metric field, the DO) are closely dependent on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism thus takes shape: point structuralism, with features common to both the absolutist-substantivalist tradition and the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least at the outset, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping in mind the results achieved by the work of Lusanna and Pauri and following their underlying philosophical assumptions, we intend to converge partially on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, namely the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered apart from the empirical necessity of measurement theory.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact do not have a tensorial and therefore geometric nature, but is analyzable at three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the level of the connections (which in the theory determine the forces acting on masses and, as such, offer a level of description analogous to the one that Newtonian gravitation provides in terms of the components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on what is called the "third level" seems to present an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatments analyzed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's affirmation of the existence of a holistic, non-local and relational structure, on which the quantitatively identified properties of point-events depend (in addition to their intrinsic detection), is realized, in our opinion, even if obtained from different considerations, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). In the end, matter, regarded as the physical counterpart of spacetime curvature, whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DO and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think that it is possible to see the holistic, relational and non-local structure of spacetime also through the "behavior" of the Weyl tensor in terms of the Riemann tensor. This "behavior", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter extends indefinitely elsewhere. Finally, we think that the characteristic relationality of the spacetime structure should be identified with this "primitive level of organization" of spacetime.
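To make the statement about the Weyl tensor concrete, one can recall the standard Ricci decomposition of the Riemann tensor in four dimensions (a textbook formula, not a result specific to the thesis): the Weyl tensor C_{abcd} is the completely trace-free part, and in an empty spacetime, where the Einstein equations give R_{ab} = 0, it coincides with the full Riemann tensor.

```latex
% Ricci decomposition of the Riemann tensor in four dimensions; C_{abcd} is the
% trace-free (Weyl) part, and in vacuum it carries the entire curvature.
\[
\begin{aligned}
R_{abcd} &= C_{abcd}
  + \tfrac{1}{2}\bigl(g_{ac}R_{bd} - g_{ad}R_{bc} + g_{bd}R_{ac} - g_{bc}R_{ad}\bigr)
  - \tfrac{R}{6}\bigl(g_{ac}g_{bd} - g_{ad}g_{bc}\bigr),\\[2pt]
g^{ac}C_{abcd} &= 0, \qquad R_{ab}=0 \;\Longrightarrow\; R_{abcd}=C_{abcd}.
\end{aligned}
\]
```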

Relevance: 30.00%

Abstract:

This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs; afterwards the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S have to be determined in order to specify the model completely. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In this thesis four different cases are discussed, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each of them the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normal. In the last chapter the efficiency of the minimum-distance estimators is examined on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
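For the Brownian-motion-with-drift case, the interspike interval is the first-passage time of the level S from x_0, which follows an inverse Gaussian law that depends on the distance a = S - x_0 once the drift and volatility are known. The sketch below (an illustration, not the thesis code; all parameter values are hypothetical) estimates a by maximum likelihood from simulated interspike intervals.

```python
# Minimal sketch: ML estimation of the distance a = S - x_0 from iid interspike
# intervals, assuming the potential between spikes is a Brownian motion with known
# drift beta > 0 and volatility sigma. The first-passage time of level S from x_0
# is inverse Gaussian with mean a/beta and shape (a/sigma)^2.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
beta, sigma, a_true = 1.5, 1.0, 2.0                     # hypothetical values

# Simulate interspike intervals directly from the first-passage law
# (scipy's invgauss(mu, scale) has mean mu*scale and shape parameter scale).
shape_true = (a_true / sigma) ** 2
t = stats.invgauss.rvs(mu=(a_true / beta) / shape_true, scale=shape_true,
                       size=500, random_state=rng)

def neg_log_lik(a: float) -> float:
    mean, shape = a / beta, (a / sigma) ** 2
    return -np.sum(stats.invgauss.logpdf(t, mu=mean / shape, scale=shape))

a_hat = optimize.minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded").x
print(f"true a = {a_true}, ML estimate = {a_hat:.3f}")
```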

Relevance: 30.00%

Abstract:

In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great number of initial data have become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational application is becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed in order to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modelled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems of modelling extreme convective events is related to the atmospheric initial conditions: in fact, the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work presents some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
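The following sketch conveys the basic idea of latent heat nudging in a strongly simplified form (it is not the COSMO-2 implementation; the quality threshold and clipping bounds are placeholders): the model latent-heating profile is rescaled by the ratio of radar-observed to model precipitation rate, and columns with poor radar quality are left untouched, which is exactly where a radar data quality description enters.

```python
# A strongly simplified sketch of the latent heat nudging idea: scale the model
# latent-heating profile in each grid column by the ratio of radar-observed to
# model precipitation rate, skipping columns whose radar quality flag is below a
# hypothetical threshold and clipping the scaling factor.
import numpy as np

def lhn_increment(lh_model, rr_model, rr_radar, radar_quality,
                  min_quality=0.7, max_scale=2.0, min_scale=0.5, eps=1e-6):
    """lh_model: (nlev, ncol) latent heating; rr_*: (ncol,) precip rates; returns the heating increment."""
    scale = np.clip(rr_radar / np.maximum(rr_model, eps), min_scale, max_scale)
    scale = np.where(radar_quality >= min_quality, scale, 1.0)   # no nudging where quality is poor
    return lh_model * (scale - 1.0)                              # increment added to the model heating

# Toy columns: the model underestimates rain in column 0; radar is unreliable in column 2.
lh = np.ones((5, 3))
print(lhn_increment(lh, rr_model=np.array([1.0, 2.0, 2.0]),
                    rr_radar=np.array([3.0, 2.0, 0.1]),
                    radar_quality=np.array([0.9, 0.95, 0.2])))
```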

Relevance: 30.00%

Abstract:

This thesis, after presenting recent advances obtained for the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the two-dimensional problem with guillotine restrictions and that of the unrestricted two-dimensional problem is studied from a worst-case perspective. Finally, it presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of problems and gives a tight approximation of the optimum in the remaining cases.
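The sketch below is not the partial-enumeration heuristic of the thesis; it is a simple shelf (next-fit decreasing-height) heuristic whose packings are guillotine-feasible by construction, one horizontal cut per shelf and vertical cuts between items, which makes the role of the guillotine restriction concrete.

```python
# A minimal shelf (next-fit decreasing-height) heuristic for 2D bin packing.
# Shelf packings are guillotine-feasible by construction: a horizontal cut separates
# each shelf, then vertical cuts separate the items placed on a shelf.
def shelf_pack(items, bin_w, bin_h):
    """items: list of (w, h); returns a list of bins, each a list of (x, y, w, h) placements."""
    bins, items = [], sorted(items, key=lambda wh: wh[1], reverse=True)  # tallest first
    cur_bin, shelf_y, shelf_h, cur_x = [], 0.0, 0.0, 0.0
    for w, h in items:
        if w > bin_w or h > bin_h:
            raise ValueError(f"item {(w, h)} does not fit in any bin")
        if cur_x + w > bin_w:                    # current shelf is full: open a new shelf
            shelf_y, shelf_h, cur_x = shelf_y + shelf_h, 0.0, 0.0
        if shelf_y + h > bin_h:                  # bin is full: open a new bin
            bins.append(cur_bin)
            cur_bin, shelf_y, shelf_h, cur_x = [], 0.0, 0.0, 0.0
        cur_bin.append((cur_x, shelf_y, w, h))
        shelf_h = max(shelf_h, h)
        cur_x += w
    bins.append(cur_bin)
    return bins

print(shelf_pack([(4, 3), (3, 3), (5, 2), (4, 2), (2, 2)], bin_w=10, bin_h=5))
```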

Relevance: 30.00%

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, and can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
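The flavor of the "exact arithmetic only at critical places" idea can be shown with a minimal sketch (not the method of the thesis): a candidate point computed in floating point is rounded to rationals, and its feasibility for A x <= b is then verified with exact rational arithmetic.

```python
# A minimal sketch: certify feasibility of A x <= b exactly. The candidate point may
# come from any floating-point solver; only the verification uses exact arithmetic.
from fractions import Fraction

def certify_feasible(A, b, x, max_denominator=10**6):
    """A: rows of rational coefficients, b: rational rhs, x: float candidate. Returns (certified, exact_x)."""
    xq = [Fraction(xi).limit_denominator(max_denominator) for xi in x]
    for row, bi in zip(A, b):
        lhs = sum(Fraction(aij) * xj for aij, xj in zip(row, xq))
        if lhs > Fraction(bi):
            return False, xq          # the rounded candidate violates this constraint exactly
    return True, xq                   # exact certificate: xq satisfies every constraint

# Toy system: x1 + x2 <= 1, -x1 <= 0, -x2 <= 0; a strictly interior candidate certifies easily.
A = [[1, 1], [-1, 0], [0, -1]]
b = [1, 0, 0]
print(certify_feasible(A, b, x=[0.3333333333, 0.3333333333]))
```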

Relevance: 30.00%

Abstract:

In the present thesis we address the problem of detecting and localizing, with microwave imaging (MWI), a small spherical target with characteristic electrical properties inside a volume of cylindrical shape representing the female breast. One of the main contributions of this project is the extension of the existing linear inversion algorithm from planar slices to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used to illuminate the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, together with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. The analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the pseudospectrum obtained and the performance analysis for the real model are reported.
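A schematic single-scatterer MUSIC sketch in free space is given below (an illustration under the Born approximation, not the thesis code; the array geometry, frequencies and scatterer position are hypothetical): the multistatic matrix of a point-like target is rank one, and projecting steering vectors onto its noise subspace yields a pseudospectrum that peaks at the target, with single-frequency pseudospectra combined incoherently.

```python
# Schematic multi-frequency MUSIC in free space: a point scatterer gives a rank-1
# multistatic matrix K = g g^T (Born approximation, coincident transmit/receive array);
# the pseudospectrum is 1 / ||projection of g(r) onto the noise subspace||^2.
import numpy as np

c = 3e8
antennas = np.array([[0.12 * np.cos(a), 0.12 * np.sin(a), 0.0]   # hypothetical antenna ring, radius 12 cm
                     for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)])
target = np.array([0.03, -0.02, 0.0])                             # hypothetical scatterer position

def steering(points, freq):
    """Unit-normalised free-space steering vectors g(r) between candidate points and antennas."""
    k = 2 * np.pi * freq / c
    d = np.linalg.norm(points[:, None, :] - antennas[None, :, :], axis=-1)
    g = np.exp(1j * k * d) / d
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def pseudospectrum(grid, freqs, n_signal=1):
    p = np.ones(len(grid))
    for f in freqs:
        g_t = steering(target[None, :], f)
        K = g_t.T @ g_t                          # rank-1 multistatic matrix of the point scatterer
        u, _, _ = np.linalg.svd(K)
        noise = u[:, n_signal:]                  # noise-subspace basis (left singular vectors)
        proj = np.linalg.norm(steering(grid, f).conj() @ noise, axis=1)
        p *= 1.0 / (proj ** 2 + 1e-12)           # incoherent (multiplicative) combination over frequencies
    return p

xs = np.linspace(-0.05, 0.05, 41)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
p = pseudospectrum(grid, freqs=[1.5e9, 2.0e9, 2.5e9])
print("pseudospectrum peak at:", grid[np.argmax(p)])
```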

Relevance: 30.00%

Abstract:

The prognosis of patients in whom pulmonary embolism (PE) is suspected but ruled out is poorly understood. We evaluated whether the initial assessment of clinical probability of PE could help to predict the prognosis for these patients.

Relevance: 30.00%

Abstract:

The diagnostic performance of isolated high-grade prostatic intraepithelial neoplasia in prostatic biopsies has recently been questioned, and molecular analysis of high-grade prostatic intraepithelial neoplasia has been proposed for improved prediction of prostate cancer. Here, we retrospectively studied the value of isolated high-grade prostatic intraepithelial neoplasia and the immunohistochemical markers α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67 for better risk stratification of high-grade prostatic intraepithelial neoplasia in our local Swiss population. From an initial 165 diagnoses of isolated high-grade prostatic intraepithelial neoplasia, we refuted 61 (37%) after consensus expert review. We used 30 reviewed high-grade prostatic intraepithelial neoplasia cases with simultaneous biopsy prostate cancer as positive controls. Rebiopsies were performed in 66 patients with isolated high-grade prostatic intraepithelial neoplasia, and the median time interval between initial and repeat biopsy was 3 months. Twenty (30%) of the rebiopsies were positive for prostate cancer, and 10 (15%) showed persistent isolated high-grade prostatic intraepithelial neoplasia. Another 2 (3%) of the 66 patients were diagnosed with prostate cancer in a second rebiopsy. Mean prostate-specific antigen serum levels did not significantly differ between the 22 patients with prostate cancer and the 44 without prostate cancer in rebiopsies, and the 30 positive control patients, respectively (median values, 8.1, 7.7, and 8.8 ng/mL). None of the immunohistochemical markers, including α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67, revealed a statistically significant association with the risk of prostate cancer in repeat biopsies. Taken together, the 33% risk of being diagnosed with prostate cancer after a diagnosis of high-grade prostatic intraepithelial neoplasia justifies rebiopsy, at least in our not systematically prostate-specific antigen-screened population. There is not enough evidence that immunohistochemical markers can reproducibly stratify the risk of prostate cancer after a diagnosis of isolated high-grade prostatic intraepithelial neoplasia.

Relevance: 30.00%

Abstract:

SETTING: Correctional settings and remand prisons. OBJECTIVE: To critically discuss calculations for epidemiological indicators of the tuberculosis (TB) burden in prisons and to provide recommendations to improve study comparability. METHODS: A hypothetical data set illustrates issues in determining incidence and prevalence. The appropriate calculation of the incidence rate is presented and problems arising from cross-sectional surveys are clarified. RESULTS: Cases recognized during the first 3 months should be classified as prevalent at entry and excluded from any incidence rate calculation. The numerator for the incidence rate includes persons detected as having developed TB during a specified period of time subsequent to the initial 3 months. The denominator is person-time at risk from 3 months onward to the end point (TB or end of the observation period). Preferably, entry time, exit time and event time are known for each inmate to determine person-time at risk. Failing that, an approximation consists of the sum of monthly head counts, excluding prevalent cases and those persons no longer at risk from both the numerator and the denominator. CONCLUSIONS: The varying durations of inmate incarceration in prisons pose challenges for quantifying the magnitude of the TB problem in the inmate population. Recommendations are made to measure incidence and prevalence.
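A sketch of the bookkeeping recommended above (illustrative, not the paper's code): cases diagnosed within the first 3 months are treated as prevalent at entry and excluded, and person-time at risk is counted only from 3 months after entry until TB diagnosis or release.

```python
# Incidence-rate calculation with a 3-month "prevalent at entry" window, using
# per-inmate entry, exit and event times (all times in months; data are invented).
from dataclasses import dataclass
from typing import Optional

WINDOW = 3.0   # months after entry treated as "prevalent at entry"

@dataclass
class Inmate:
    entry: float              # months since start of the observation period
    exit: float               # release or end of the observation period
    tb_time: Optional[float]  # time of TB diagnosis, or None

def incidence_rate(inmates: list[Inmate]) -> float:
    cases, person_months = 0, 0.0
    for p in inmates:
        at_risk_start = p.entry + WINDOW
        if p.tb_time is not None and p.tb_time < at_risk_start:
            continue                                   # prevalent at entry: excluded entirely
        end = p.tb_time if p.tb_time is not None else p.exit
        if end <= at_risk_start:
            continue                                   # released before contributing risk time
        person_months += end - at_risk_start
        if p.tb_time is not None:
            cases += 1
    return cases / person_months                        # cases per person-month at risk

cohort = [Inmate(0, 24, None), Inmate(0, 12, 2.0), Inmate(3, 18, 10.0), Inmate(0, 6, None)]
print(f"incidence = {incidence_rate(cohort):.4f} cases per person-month")
```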

Relevance: 30.00%

Abstract:

Dental erosion is the non-carious dental substance loss induced by direct impact of exogenous or endogenous acids. It results in a loss of dental hard tissue, which can be serious in some groups, such as those with eating disorders, in patients with gastroesophageal reflux disease, and also in persons consuming high amounts of acidic drinks and foodstuffs. For these persons, erosion can impair their well-being, due to changes in appearance and/or loss of function of the teeth, e.g., the occurrence of hypersensitivity of teeth if the dentin is exposed. If erosion reaches an advanced stage, time- and money-consuming therapies may be necessary. The therapy, in turn, poses a challenge for the dentist, particularly if the defects are diagnosed at an advanced stage. While initial and moderate defects can mostly be treated non- or minimally invasively, severe defects often require complex therapeutic strategies, which often entail extensive loss of dental hard tissue due to preparatory measures. A major goal should therefore be to diagnose dental erosion at an early stage, to avoid functional and esthetic impairments as well as pain sensations and to ensure longevity of the dentition.

Relevance: 30.00%

Abstract:

In order to improve the ability to link chemical exposure to toxicological and ecological effects, aquatic toxicology will have to move from observing what chemical concentrations induce adverse effects to more explanatory approaches, that is, concepts that build on knowledge of the biological processes and pathways leading from exposure to adverse effects, as well as on knowledge of stressor vulnerability as given by the genetic, physiological and ecological (e.g., life history) traits of biota. Developing aquatic toxicology in this direction faces a number of challenges, including (i) taking into account species differences in toxicant responses on the basis of the evolutionarily developed diversity of phenotypic vulnerability to environmental stressors; (ii) utilizing diversified biological response profiles to serve as biological read-across for prioritizing chemicals, categorizing them according to modes of action, and guiding targeted toxicity evaluation; (iii) predicting the ecological consequences of toxic exposure from knowledge of how biological processes and phenotypic traits lead to effect propagation across the levels of biological hierarchy; and (iv) the search for concepts to assess the cumulative impact of multiple stressors. An underlying theme in these challenges is that, in addition to the question of what the chemical does to the biological receptor, we should give increasing emphasis to the question of how the biological receptor handles the chemicals, i.e., through which pathways the initial chemical-biological interaction extends to the adverse effects, and how this extension is modulated by adaptive or compensatory processes as well as by the phenotypic traits of the biological receptor.

Relevance: 30.00%

Abstract:

This project intertwines philosophical and historico-literary themes, taking as its starting point the concept of tragic consciousness inherent in the epoch of classicism. The research makes use of ontological categories in order to describe the underlying principles of the image of the world which was created in the philosophical and scientific theories of the 17th century as well as in contemporary drama. Using these categories brought Mr. Vilk to the conclusion that the classical picture of the world implied a certain dualism; not the Manichaean division between light and darkness but the discrimination between nature and absolute being, i.e. God. Mr. Vilk begins with an examination of the philosophical essence of French classical theatre of the 17th and 18th centuries. The history of French classical tragedy can be divided into three periods: from the mid-17th to the early 19th century, when it triumphed all over France and exerted a powerful influence over almost all European countries; followed by the period of its rejection by the Romantics, who declared classicism to be "artificial and rational"; and finally our own century, which has taken a more moderate line. Nevertheless, French classical tragedy has never fully recovered its status. Instead, it is ancient tragedy and the works of Shakespeare that are regarded as the most adequate embodiment of the tragic. Consequently they still provoke a great number of new interpretations, ranging from specialised literary criticism to more philosophical rumination. An important feature of classical tragedy is a system of rules and unities which reveals a hidden ontological structure of the world. The ontological picture of the dramatic world can be described in categories worked out by medieval philosophy - being, essence and existence. The first category is to be understood as a tendency toward permanency and stability (within eternity) connected with this or that fragment of dramatic reality. The second implies a certain set of permanent elements that make up the reality. And the third - existence - should be understood as "an act of being", as a realisation of permanently renewed processes of life. All of these categories can be found in every artistic reality, but the accents put on one or another, and their interrelations, create different ontological perspectives. Mr. Vilk plots the movement of thought, expressed in both philosophical and scientific discourses, away from Aristotle's essential forms and towards a prioritising of existence, and shows how new forms of literature and drama structured the world according to these evolving requirements. At the same time the world created in classical tragedy fully preserves another ontological paradigm - being - as a fundamental permanence. As far as the tragic hero's motivations are concerned, this paradigm is revealed in the dedication of his whole self to some cause, and his oath of fidelity, attitudes which shape his behaviour. It may be the idea of the State, or personal honour, or something borrowed from the emotional sphere, passionate love. Mr. Vilk views the conflicting ambivalence of existence and being, duty as responsibility and duty as fidelity, as underlying the main conflict of classical tragedy of the 17th century. Having plotted the movement of the being/existence duality through its manifestations in 17th-century tragedy, Mr. Vilk moves to the 18th century, when tragedy took a philosophical turn.
A dualistic view of the world was supplanted by the Enlightenment idea of a natural law, rooted in nature. The main point of tragedy now was to reveal that such conflicts as might take place had an anti-rational nature, that they arose as the result of a kind of superstition caused by social reasons. These themes Mr. Vilk now pursues through Russian dramatists of the 18th and early 19th centuries. He begins with Sumarokov, whose philosophical thought has a religious bias. According to Sumarokov, the dualism of the divineness and naturalness of man is on the one hand an eternal paradox, and on the other, a moral challenge for humans to try to unite the two opposites. His early tragedies are not concerned with social evils or the triumph of natural feelings and human reason, but rather with the tragic disharmony in the nature of man and the world. Mr. Vilk turns next to the work of Kniazhnin. He is particularly keen to rescue his reputation from the judgements of critics who accuse him of being imitative, and in order to do so, analyses in detail the tragedy "Dido", in which Kniazhnin makes an attempt to revive the image of great heroes and city-founders. Aeneas represents the idea of the "being" of Troy; his destiny is the re-establishment of the city (the future Rome). The moral aspect behind this idea is faithfulness: he devotes himself to the gods. Dido is also the creator of a city, endowed with "natural powers" and abilities, but her creation lacks the internal stability grounded in "being". The unity of the two motives is only achieved through Dido's sacrifice of herself and her city to Aeneas. Mr. Vilk's next subject is Kheraskov, whose peculiarity lies in the influence of Freemason mysticism on his work. This section deals with one of the most important philosophical assumptions contained in contemporary Freemason literature of the time - the idea of the trinitarian hierarchy inherent in man and the world: body - soul - spirit, and nature - law - grace. Finally, Mr. Vilk assesses the work of Ozerov, the last major Russian tragedian. The tragedies which earned him fame, "Oedipus in Athens", "Fingal" and "Dmitri Donskoi", present a compromise between the Enlightenment's emphasis on harmony and ontological tragic conflict. But it is in "Polixene" that a real meeting of the Russian tradition with the age-old history of the genre takes place. The male and female characters of "Polixene" distinctly express the elements of "being" and "existence". Each of the participants in the conflict possesses some dominant characteristic personifying a certain indispensable part of the moral world, a certain "virtue". But their independent efforts are unable to overcome the ontological gap separating them. The end of the tragedy - Polixene's sacrificial self-immolation - paradoxically combines the glorification of each party involved in the conflict and their condemnation. The final part of Mr. Vilk's research deals with the influence of "Polixene" upon subsequent dramatic art. In this respect Katenin's "Andromacha", inspired by "Polixene", is important to mention. In "Andromacha" a decisive divergence from the principles of the philosophical tragedy of Russian classicism and the ontology of classicism occurs: a new character appears as an independent personality, directed by his private interest. It was Katenin who was to become the intermediary between Pushkin and classical tragedy.

Relevance: 30.00%

Abstract:

The coronary artery calcium (CAC) score is a readily and widely available tool for the noninvasive diagnosis of atherosclerotic coronary artery disease (CAD). The aim of this study was to investigate the added value of the CAC score as an adjunct to gated SPECT for the assessment of CAD in an intermediate-risk population. METHODS: Seventy-seven prospectively recruited patients with intermediate risk (as determined by the Framingham Heart Study 10-y CAD risk score) and referred for coronary angiography because of suspected CAD underwent stress (99m)Tc-tetrofosmin SPECT myocardial perfusion imaging (MPI) and CT CAC scoring within 2 wk before coronary angiography. The sensitivity and specificity of SPECT alone and of the combination of the 2 methods (SPECT plus CAC score) in demonstrating significant CAD (≥50% stenosis on coronary angiography) were compared. RESULTS: Forty-two (55%) of the 77 patients had CAD on coronary angiography, and 35 (45%) had abnormal SPECT results. The CAC score was significantly higher in subjects with perfusion abnormalities than in those who had normal SPECT results (889 ± 836 [mean ± SD] vs. 286 ± 335; P < 0.0001). Similarly, with rising CAC scores, a larger percentage of patients had CAD. Receiver-operating-characteristic analysis showed that a CAC score of greater than or equal to 709 was the optimal cutoff for detecting CAD missed by SPECT. SPECT alone had a sensitivity and a specificity for the detection of significant CAD of 76% and 91%, respectively. Combining SPECT with the CAC score (at a cutoff of 709) improved the sensitivity of SPECT (from 76% to 86%) for the detection of CAD, in association with a nonsignificant decrease in specificity (from 91% to 86%). CONCLUSION: The CAC score may offer incremental diagnostic information over SPECT data for identifying patients with significant CAD and negative MPI results.
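The combined rule described above ("positive if SPECT is abnormal or CAC ≥ 709") can be illustrated on hypothetical data; the patients below are invented for illustration only and do not reproduce the study's figures.

```python
# Toy illustration of the combined decision rule: a patient is called positive when
# SPECT is abnormal OR the CAC score is at least the cutoff of 709; sensitivity and
# specificity are computed against angiographic CAD as the reference standard.
def sens_spec(predicted, truth):
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum(not p and not t for p, t in zip(predicted, truth))
    return tp / sum(truth), tn / (len(truth) - sum(truth))

cad   = [True, True, True, True, False, False, False, False]    # angiography result (hypothetical)
spect = [True, True, False, False, False, False, False, True]   # abnormal SPECT (hypothetical)
cac   = [820, 150, 900, 300, 100, 750, 40, 60]                   # CAC scores (hypothetical)

combined = [s or c >= 709 for s, c in zip(spect, cac)]
print("SPECT alone: sens=%.2f spec=%.2f" % sens_spec(spect, cad))
print("SPECT + CAC: sens=%.2f spec=%.2f" % sens_spec(combined, cad))
# The OR-combination raises sensitivity at the cost of some specificity, which is the
# qualitative pattern reported in the abstract.
```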