866 results for Characteristic Initial Value Problem


Relevance:

30.00%

Publisher:

Abstract:

A transmission problem involving two Euler-Bernoulli equations modeling the vibrations of a composite beam is studied. Assuming that the beam is clamped at one extremity, and resting on an elastic bearing at the other extremity, the existence of a unique global solution and decay rates of the energy are obtained by adding just one damping device at the end containing the bearing mechanism.
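For readers outside the field, a minimal LaTeX sketch of a transmission problem of this type is given below; the interface position L_0 and the coefficients are generic placeholders, and the boundary, bearing and damping terms used in the paper are not reproduced.

% Two Euler-Bernoulli beams coupled at x = L_0 (assumed form, not the paper's exact system)
\rho_1 u_{tt} + E_1 I_1\, u_{xxxx} = 0, \quad x \in (0, L_0),\ t > 0,
\rho_2 v_{tt} + E_2 I_2\, v_{xxxx} = 0, \quad x \in (L_0, L),\ t > 0,
% standard transmission conditions: continuity of displacement, slope, bending moment and shear
u = v, \qquad u_x = v_x, \qquad E_1 I_1 u_{xx} = E_2 I_2 v_{xx}, \qquad E_1 I_1 u_{xxx} = E_2 I_2 v_{xxx} \quad \text{at } x = L_0.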

Relevance:

30.00%

Publisher:

Abstract:

The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. Until now it has not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is the most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well observed facts and including realistic physics in a 3D magneto-hydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short startup phase, in which the initial thermal stratification is quickly forgotten, after which it settles into statistical equilibrium. In this state, the magnetic field is able to dissipate the same amount of energy as is estimated to be lost through radiation, which is the main energy loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, and hot loops similar to those observed through narrow-band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.
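As background for the braiding-and-dissipation picture sketched above, the standard resistive MHD induction equation and the corresponding Joule heating rate per unit volume are written below in LaTeX (textbook forms with constant magnetic diffusivity \eta, SI units; not the specific equation set solved in the thesis):

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^2 \mathbf{B},
\qquad
Q_{\mathrm{Joule}} = \frac{\eta}{\mu_0}\, |\nabla \times \mathbf{B}|^2 .

The first term describes the braiding of the field by the photospheric velocity \mathbf{v}; the diffusive term, together with Q_{\mathrm{Joule}}, is where the stored magnetic energy is converted into heat.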

Relevance:

30.00%

Publisher:

Abstract:

[EN] In this paper, we use Geographical Information Systems (GIS) to solve the planar Huff problem considering different demand distributions and forbidden regions. Most papers on competitive location problems consider the demand to be aggregated in a finite set of points. In a few other cases, the models assume that the demand is distributed over the feasible region according to a functional form, mainly a uniform distribution. Here, in addition to the discrete and uniform demand distributions, we consider demand represented by a population surface model, that is, a raster map where each pixel has an associated value corresponding to the population living in the area it covers...
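For context, the classical Huff model assigns to demand located at point i the probability of patronising facility j as below (A_j is the attractiveness of facility j, d_ij the distance from i to j and \lambda a distance-decay exponent; the exact variant used in the paper may differ):

P_{ij} = \frac{A_j\, d_{ij}^{-\lambda}}{\sum_{k} A_k\, d_{ik}^{-\lambda}} .

The expected market share captured by a new facility is then obtained by integrating (or, on a raster, summing pixel by pixel) P_{ij} weighted by the demand distribution.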

Relevance:

30.00%

Publisher:

Abstract:

Many combinatorial problems coming from the real world do not have a clear and well-defined structure: they are typically dirtied by side constraints, or composed of two or more sub-problems that are usually not disjoint. Such problems are not well suited to pure approaches based on a single programming paradigm, because a paradigm that effectively handles one characteristic of a problem may behave inefficiently when facing the others. In these cases, modelling the problem with different programming techniques, trying to "take the best" from each, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, with Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
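As a toy illustration of the portfolio idea mentioned above (not the authors' actual framework), the following Python sketch runs a list of solving strategies sequentially, each under its own time budget, and returns the first solution found; the two "solvers" and the subset-sum instance are hypothetical placeholders.

import time
from itertools import combinations

def exhaustive_solver(values, target, deadline):
    # Complete enumeration: tries every subset until the deadline expires.
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if time.monotonic() > deadline:
                return None  # time budget exhausted
            if sum(combo) == target:
                return list(combo)
    return None

def greedy_solver(values, target, deadline):
    # Cheap heuristic: take items largest-first while they still fit.
    chosen, total = [], 0
    for v in sorted(values, reverse=True):
        if time.monotonic() > deadline:
            return None
        if total + v <= target:
            chosen.append(v)
            total += v
    return chosen if total == target else None

def portfolio(solvers, values, target, budget_per_solver=1.0):
    # Sequential algorithm portfolio: give each strategy its own time
    # slice and stop at the first one that succeeds.
    for solve in solvers:
        deadline = time.monotonic() + budget_per_solver
        solution = solve(values, target, deadline)
        if solution is not None:
            return solve.__name__, solution
    return None, None

if __name__ == "__main__":
    values, target = [3, 34, 4, 12, 5, 2], 9
    name, solution = portfolio([greedy_solver, exhaustive_solver], values, target)
    print(f"solved by {name}: {solution}")

In a hybrid CP/ILP setting the entries of the list would be calls to the different solvers (or to a Logic-Based Benders master/subproblem loop) rather than these toy functions.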

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this work is twofold: to define and calculate a factor of collapse related to the traditional method for designing sheet pile walls, and to identify the parameters that most influence a finite element model of this problem. The text is structured as follows: chapters 1 to 5 analyze a series of topics that are useful for understanding the problem, while the considerations mainly related to the purpose of the work are reported in chapters 6 to 10. The first part of the document covers the following topics: what a sheet pile wall is, which codes govern the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, addressing the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in a sandy soil with those provided by a finite element model of the same problem, we concluded that, in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behavior of the structure-soil system, while other parameters, such as the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analyzed two different structures: the first is able to support an excavation of 4 m, the second an excavation of 7 m. Both structures are first designed using the traditional method, then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangent of the initial friction angle and the tangent of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found that the value of the factor of collapse lies between 1.25 and 2. With some considerations reported in the text, the values found in this way can be compared with the value of the safety factor proposed by the code (linked to the friction angle of the soil).
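In symbols, the factor of collapse defined above is

F_c = \frac{\tan \varphi_{0}}{\tan \varphi_{c}},

where \varphi_0 is the friction angle assumed in the design and \varphi_c is the reduced friction angle at which the finite element model reaches collapse; the values reported above therefore correspond to 1.25 \le F_c \le 2 in the majority of cases.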

Relevance:

30.00%

Publisher:

Abstract:

The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await an interpretation. Nevertheless, these data are the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure of molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates along the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
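As a small, self-contained illustration of the network quantities discussed above (characteristic path length and clustering coefficient of a protein contact network), the Python sketch below builds a contact map from synthetic Calpha-like coordinates with an assumed 8 Å distance cutoff; a real analysis would start from experimental (e.g. PDB) coordinates instead.

import numpy as np
import networkx as nx

def contact_map(coords, cutoff=8.0):
    # Boolean contact map: residues i and j are in contact if their
    # distance is below the cutoff (self-contacts excluded).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (d < cutoff) & ~np.eye(len(coords), dtype=bool)

def network_descriptors(cmap):
    # Characteristic path length and clustering coefficient of the contact
    # network; the path length is computed on the largest connected
    # component to avoid infinite distances.
    g = nx.from_numpy_array(cmap.astype(int))
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant), nx.average_clustering(g)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic chain: a 3D random walk with ~3.8 Å steps, standing in
    # for a Calpha trace of 120 residues.
    steps = rng.normal(size=(120, 3))
    steps = 3.8 * steps / np.linalg.norm(steps, axis=1, keepdims=True)
    coords = np.cumsum(steps, axis=0)
    L, C = network_descriptors(contact_map(coords))
    print(f"characteristic path length: {L:.2f}, clustering coefficient: {C:.3f}")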

Relevance:

30.00%

Publisher:

Abstract:

The olive oil extraction industry is responsible for the production of large quantities of vegetation waters, consisting of the constitutive water of the olive fruit and the water used during the process. This by-product represents an environmental problem in olive-growing areas because of its high content of organic matter, with high BOD5 and COD values. For that reason the disposal of the vegetation water is very difficult and requires prior depollution. The organic matter of vegetation water mainly consists of polysaccharides, sugars, proteins, organic acids, oil and polyphenols. These last compounds are principally responsible for the pollution problems, due to their antimicrobial activity, but, at the same time, they are well known for their antioxidant properties. The most concentrated phenolic compounds in the waters, and also in virgin olive oils, are secoiridoids such as oleuropein, demethyloleuropein and ligstroside derivatives (the dialdehydic form of elenolic acid linked to 3,4-DHPEA or p-HPEA (3,4-DHPEA-EDA or p-HPEA-EDA) and an isomer of the oleuropein aglycon (3,4-DHPEA-EA)). The management of olive oil vegetation water has been extensively investigated and several different valorisation methods have been proposed, such as direct use as fertilizer or transformation by physico-chemical or biological treatments. During the last years researchers have focused their interest on the recovery of the phenolic fraction from this waste, looking to exploit it as a natural antioxidant source. At present only a few contributions have addressed large-scale phenol recovery, and further investigations are required to evaluate the feasibility and costs of the proposed processes. The present PhD thesis reports a preliminary description of a new industrial-scale process for the recovery of the phenolic fraction from olive oil vegetation water treated with enzymes, by direct membrane filtration (microfiltration/ultrafiltration with a cut-off of 250 kDa, ultrafiltration with a cut-off of 7 kDa/10 kDa and nanofiltration/reverse osmosis), partial purification by a purification system based on SPE and by a liquid-liquid extraction (LLE) system, with a simultaneous reduction of the pollution-related problems. The phenolic fractions of all the samples obtained were qualitatively and quantitatively characterized by HPLC analysis. The process efficiency, in terms of flows and of phenolic recovery, gave good results: the final phenolic recovery is about 60% of the initial content of the vegetation waters. The final concentrate showed a high content of phenols, which allows us to hypothesize a possible use as a zootechnical nutritional supplement. The purification of the final concentrate guaranteed a high purity level of the phenolic extract, especially in the SPE procedure using XAD-16 (73% of the total phenolic content of the concentrate). This purity level could permit future use in the food industry as a food additive or, thanks to the strong antioxidant activity, in the pharmaceutical or cosmetic industries. The depollution of the vegetation water also gave good results: the final reverse osmosis permeate has a low pollutant load in terms of COD and BOD5 values (2% of the initial vegetation water), which could allow its reuse in the virgin olive oil mechanical extraction process, saving water and thus reducing the oil industry's disposal costs.

Relevance:

30.00%

Publisher:

Abstract:

Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of the Einstein equations that are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in the particular class of models they consider, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism assumptions about theories and entities, the two authors then conclude by saying that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are plunged into a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. Therefore, according to the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality", but the inevitably relational elements involved in the physical detection of point-events in the vacuum of matter (highlighted by the "ontic part" of the metric field, the DO) are closely dependent on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least at the beginning, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping the results achieved by the work of Lusanna and Pauri in mind and following their underlying philosophical assumptions, we intend to partially converge on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, namely the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered regardless of the empirical necessity of measurement theory.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact do not have a tensorial and, therefore, geometric nature, but is analyzable through three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the level of the connections (which in the theory determine the forces acting on masses and, as such, offer a level of description analogous to the one that Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on what is called the "third level" seems to present an immediate advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatments analyzed, leads to identifying the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exercises in extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's claim that there exists a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even if obtained from different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). In the end, matter, regarded as the physical counterpart of spacetime curvature, whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational and non-local structure of spacetime also through the "behavior" of the Weyl tensor within the Riemann tensor. This "behavior", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter extends indefinitely far. Finally, we think that the characteristic relationality of the spacetime structure should be identified in this "primitive level of organization" of spacetime.
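For reference, the decomposition alluded to above can be written in LaTeX, in four dimensions and up to sign and index conventions, as

C_{abcd} = R_{abcd} - \left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right) + \tfrac{1}{3}\, R\, g_{a[c} g_{d]b},

where C_{abcd} is the Weyl tensor, R_{abcd} the Riemann tensor, R_{ab} the Ricci tensor and R the scalar curvature. In vacuum (R_{ab} = 0) the Riemann tensor reduces to its Weyl part, which is why the Weyl tensor carries the curvature content of the empty spacetimes discussed above.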

Relevance:

30.00%

Publisher:

Abstract:

This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X exceeds a certain excitation threshold S, a spike occurs; afterwards the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In this thesis four different cases are discussed, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar and the maximum likelihood estimator can be given explicitly in each of them. Moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
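To make the model concrete, the following Python sketch simulates interspike intervals as hitting times of a threshold S by an Ornstein-Uhlenbeck process started at x_0, using a simple Euler-Maruyama scheme; the parameter values are hypothetical and the thesis' estimators themselves are not reproduced here.

import numpy as np

def ou_hitting_time(x0, S, theta, mu, sigma, dt=1e-3, t_max=50.0, rng=None):
    # Euler-Maruyama simulation of dX_t = theta*(mu - X_t) dt + sigma dB_t,
    # started at x0, until X first exceeds the threshold S (a "spike").
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    while t < t_max:
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
        if x >= S:
            return t        # hitting time of the threshold
    return np.nan           # no spike within t_max

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical parameters: a resting level mu above the threshold S
    # guarantees finite hitting times; x0 is the reset value after a spike.
    times = [ou_hitting_time(x0=0.0, S=1.0, theta=1.0, mu=1.2,
                             sigma=0.5, rng=rng) for _ in range(200)]
    print("mean interspike interval:", np.nanmean(times))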

Relevance:

30.00%

Publisher:

Abstract:

In the last few years the resolution of numerical weather prediction (NWP) has become higher and higher with the progress of technology and knowledge. As a consequence, a great amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational application is becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed to prevent radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modelled only with NWP models of the highest resolution, such as the COSMO-2 model. One of the problems of modelling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too high for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically for the latent heat nudging (LHN) scheme. Then the convective capabilities of the COSMO-2 model are investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.

Relevance:

30.00%

Publisher:

Abstract:

This thesis, after presenting recent advances obtained for the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the two-dimensional problem with guillotine restrictions and that of the unrestricted two-dimensional problem is studied from a worst-case perspective. Finally, it presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of problems and gives a tight approximation of the optimum in the remaining cases.

Relevance:

30.00%

Publisher:

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow; therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management because of the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless in general limited to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
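To illustrate the role of exact arithmetic at the "critical places" mentioned above, the Python sketch below certifies that a candidate point (for instance one returned by a floating-point solver) satisfies a small system Ax <= b using rational arithmetic; it covers only this verification step, not the computation of a relative-interior point or the 'unknown' logic of the method.

from fractions import Fraction

def certify_feasible(A, b, x):
    # Exact check that A @ x <= b, row by row, in rational arithmetic,
    # so the verdict cannot be corrupted by floating-point rounding.
    A = [[Fraction(a) for a in row] for row in A]
    b = [Fraction(v) for v in b]
    x = [Fraction(v) for v in x]
    slacks = [bi - sum(ai * xi for ai, xi in zip(row, x))
              for row, bi in zip(A, b)]
    return all(s >= 0 for s in slacks), slacks

if __name__ == "__main__":
    # Constraints:  x1 + x2 <= 1,  -x1 <= 0,  -x2 <= 0
    A = [[1, 1], [-1, 0], [0, -1]]
    b = [1, 0, 0]
    x = [0.3, 0.3]   # candidate from a (hypothetical) floating-point solver
    feasible, slacks = certify_feasible(A, b, x)
    print("certified feasible:", feasible, "slacks:", slacks)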

Relevance:

30.00%

Publisher:

Abstract:

In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a volume of cylindrical shape, representing the female breast, with MWI. One of the main contributions of this project is the extension of the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two different approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna is used to illuminate the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is registered. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, it is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition for a number of frequencies in a given range: the different pseudospectra, reconstructed from single-frequency data, are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are then reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and performance analysis for the real model are reported.

Relevance:

30.00%

Publisher:

Abstract:

The prognosis of patients in whom pulmonary embolism (PE) is suspected but ruled out is poorly understood. We evaluated whether the initial assessment of clinical probability of PE could help to predict the prognosis for these patients.

Relevance:

30.00%

Publisher:

Abstract:

The diagnostic performance of isolated high-grade prostatic intraepithelial neoplasia in prostatic biopsies has recently been questioned, and molecular analysis of high-grade prostatic intraepithelial neoplasia has been proposed for improved prediction of prostate cancer. Here, we retrospectively studied the value of isolated high-grade prostatic intraepithelial neoplasia and the immunohistochemical markers α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67 for better risk stratification of high-grade prostatic intraepithelial neoplasia in our local Swiss population. From an initial 165 diagnoses of isolated high-grade prostatic intraepithelial neoplasia, we refuted 61 (37%) after consensus expert review. We used 30 reviewed high-grade prostatic intraepithelial neoplasia cases with simultaneous biopsy prostate cancer as positive controls. Rebiopsies were performed in 66 patients with isolated high-grade prostatic intraepithelial neoplasia, and the median time interval between initial and repeat biopsy was 3 months. Twenty (30%) of the rebiopsies were positive for prostate cancer, and 10 (15%) showed persistent isolated high-grade prostatic intraepithelial neoplasia. Another 2 (3%) of the 66 patients were diagnosed with prostate cancer in a second rebiopsy. Mean prostate-specific antigen serum levels did not significantly differ between the 22 patients with prostate cancer and the 44 without prostate cancer in rebiopsies, and the 30 positive control patients, respectively (median values, 8.1, 7.7, and 8.8 ng/mL). None of the immunohistochemical markers, including α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67, revealed a statistically significant association with the risk of prostate cancer in repeat biopsies. Taken together, the 33% risk of being diagnosed with prostate cancer after a diagnosis of high-grade prostatic intraepithelial neoplasia justifies rebiopsy, at least in our not systematically prostate-specific antigen-screened population. There is not enough evidence that immunohistochemical markers can reproducibly stratify the risk of prostate cancer after a diagnosis of isolated high-grade prostatic intraepithelial neoplasia.