671 results for UNIQUENESS


Relevance:

10.00%

Publisher:

Abstract:

Estetyka w archeologii. Antropomorfizacje w pradziejach i starożytności [Aesthetics in Archaeology: Anthropomorphizations in Prehistory and Antiquity], eds. E. Bugaj, A. P. Kowalski, Poznań: Wydawnictwo Poznańskie.

Relevance:

10.00%

Publisher:

Abstract:

Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three-dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue-mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three-dimensional motion estimates from ultrasound images of the deforming material. Its novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint in the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three-dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue-mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
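The inverse-problem recipe described above (measure deformation, posit a forward model, minimize the data mismatch) can be sketched in miniature. The toy below is ours, not the thesis's algorithm: springs in series stand in for the tissue, and plain gradient descent stands in for the adjoint/BFGS machinery, but the structure of the computation is the same.

```python
import numpy as np

# Toy 1-D "elasticity imaging" sketch (not the thesis algorithm): springs in
# series under a known force F, where each spring's extension is u_i = F / k_i.
# We recover the stiffness distribution k from measured extensions by
# minimizing a least-squares data-mismatch functional with gradient descent.

F = 1.0
k_true = np.array([1.0, 3.0, 2.0])   # the "shear modulus" distribution to recover
u_meas = F / k_true                  # the measured deformation

def objective_and_grad(k):
    u = F / k                        # forward model: stiffness -> deformation
    r = u - u_meas                   # data mismatch
    J = 0.5 * np.dot(r, r)
    dJdk = r * (-F / k**2)           # chain rule through the forward model
    return J, dJdk

k = np.ones(3)                       # initial guess: homogeneous stiffness
for _ in range(2000):
    _, g = objective_and_grad(k)
    k -= 2.0 * g                     # fixed step size; adequate for this toy
```

After the loop, `k` agrees with `k_true` to high accuracy; a real 3D problem replaces the algebraic forward model with a finite element solve and the hand-written gradient with the adjoint method.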

Relevance:

10.00%

Publisher:

Abstract:

A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
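The behavior the calculus formalizes can be observed concretely in any language with weak references; here is a minimal CPython illustration (Python is our choice for illustration, not one of the languages discussed in the paper):

```python
import gc
import weakref

class Node:
    """A plain object we can point a weak reference at."""

# A weak reference does not keep its target alive: once the last strong
# reference is dropped and the collector runs, dereferencing yields None.
target = Node()
wref = weakref.ref(target)
assert wref() is target   # the target is still strongly reachable

del target                # drop the only strong reference
gc.collect()              # CPython frees it on the refcount drop already;
                          # the explicit collection is for clarity
assert wref() is None     # the weak reference now dereferences to None
```

This is exactly the source of nondeterminism the paper confronts: whether `wref()` yields the object or `None` depends on when collection happens.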

Relevance:

10.00%

Publisher:

Abstract:

Weak references provide the programmer with limited control over the process of memory management. By using them, a programmer can make decisions based on previous actions that are taken by the garbage collector. Although this is often helpful, the outcome of a program using weak references is less predictable due to the nondeterminism they introduce in program evaluation. It is therefore desirable to have a framework of formal tools to reason about weak references and programs that use them. We present several calculi that formalize various aspects of weak references, inspired by their implementation in Java. We provide a calculus to model multiple levels of non-strong references, where a different garbage collection policy is applied to each level. We consider different collection policies such as eager collection and lazy collection. Similar to the way they are implemented in Java, we give the semantics of eager collection to weak references and the semantics of lazy collection to soft references. Moreover, we condition garbage collection on the availability of time and space resources: while time constraints are used to restrict garbage collection, space constraints are used to trigger it. Finalizers are a problematic feature in Java, especially when they interact with weak references. We provide a calculus to model finalizer evaluation. Since finalizers have little meaning in a language without side effects, we introduce a limited form of side effect into the calculus. We discuss determinism and the separate notion of uniqueness of (evaluation) outcome. We show that in our calculus, finalizer evaluation does not affect uniqueness of outcome.
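As a concrete (and deliberately naive) illustration of the eager-versus-lazy distinction drawn above, the toy model below is ours, not the paper's calculus: weak references are cleared at every collection, while soft references are cleared only when a simulated space constraint triggers it.

```python
# Toy model of two collection policies for non-strong references, loosely
# mirroring Java: "weak" references are cleared eagerly at every collection;
# "soft" references are cleared lazily, only under memory pressure.

class Ref:
    def __init__(self, value, level):
        self.value = value          # becomes None once cleared
        self.level = level          # "weak" or "soft"

def collect(refs, memory_pressure=False):
    for r in refs:
        if r.value is None:
            continue                # already cleared
        if r.level == "weak":                        # eager policy
            r.value = None
        elif r.level == "soft" and memory_pressure:  # lazy, space-triggered
            r.value = None

w, s = Ref("w-obj", "weak"), Ref("s-obj", "soft")
collect([w, s])                          # an ordinary collection
assert w.value is None and s.value == "s-obj"
collect([w, s], memory_pressure=True)    # a space constraint triggers clearing
assert s.value is None
```

The `memory_pressure` flag plays the role of the paper's space constraint: it is what *triggers* collection of the lazier level.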

Relevance:

10.00%

Publisher:

Abstract:

This thesis is concerned with uniformly convergent finite element and finite difference methods for numerically solving singularly perturbed two-point boundary value problems. We examine the following four problems: (i) high-order problems of reaction-diffusion type; (ii) high-order problems of convection-diffusion type; (iii) second-order interior turning point problems; (iv) semilinear reaction-diffusion problems. Firstly, we consider high-order problems of reaction-diffusion type and convection-diffusion type. Under suitable hypotheses, the coercivity of the associated bilinear forms is proved and representation results for the solutions of such problems are given. It is shown that, on an equidistant mesh, polynomial schemes cannot achieve a high order of convergence which is uniform in the perturbation parameter. Piecewise polynomial Galerkin finite element methods are then constructed on a Shishkin mesh. High-order convergence results, which are uniform in the perturbation parameter, are obtained in various norms. Secondly, we investigate linear second-order problems with interior turning points. Piecewise linear Galerkin finite element methods are generated on various piecewise equidistant meshes designed for such problems. These methods are shown to be convergent, uniformly in the singular perturbation parameter, in a weighted energy norm and the usual L2 norm. Finally, we deal with a semilinear reaction-diffusion problem. Asymptotic properties of solutions to this problem are discussed and analysed. Two simple finite difference schemes on Shishkin meshes are applied to the problem. They are proved to be uniformly convergent of second order and fourth order, respectively. Existence and uniqueness of a solution to both schemes are investigated. Numerical results for the above methods are presented.
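A minimal sketch of the Shishkin-mesh idea on the model problem -ε²u'' + u = 1, u(0) = u(1) = 0: the piecewise-uniform layout below, with its standard transition point τ = min(1/4, 2ε ln N), is a textbook construction, not one of the thesis's schemes, and the three-point scheme is the simplest possible choice.

```python
import numpy as np

# Shishkin mesh for -eps^2 u'' + u = 1 on (0,1), u(0) = u(1) = 0, whose
# solution has boundary layers of width O(eps) at both endpoints: N/4 cells
# are condensed into each layer [0,tau] and [1-tau,1], N/2 in the interior.

def shishkin_mesh(N, eps):
    tau = min(0.25, 2.0 * eps * np.log(N))          # standard transition point
    left = np.linspace(0.0, tau, N // 4 + 1)
    mid = np.linspace(tau, 1.0 - tau, N // 2 + 1)
    right = np.linspace(1.0 - tau, 1.0, N // 4 + 1)
    return np.concatenate([left, mid[1:-1], right])  # N + 1 nodes in total

def solve(N, eps):
    x = shishkin_mesh(N, eps)
    h = np.diff(x)                                   # nonuniform step sizes
    A = np.zeros((N + 1, N + 1))
    b = np.ones(N + 1)
    A[0, 0] = A[N, N] = 1.0                          # Dirichlet boundary rows
    b[0] = b[N] = 0.0
    for i in range(1, N):
        hl, hr = h[i - 1], h[i]
        w = 2.0 / (hl + hr)                          # 3-point stencil, nonuniform
        A[i, i - 1] = -eps**2 * w / hl
        A[i, i + 1] = -eps**2 * w / hr
        A[i, i] = eps**2 * w * (1.0 / hl + 1.0 / hr) + 1.0
    return x, np.linalg.solve(A, b)

x, u = solve(64, 1e-2)
# Exact solution for comparison: u = 1 - cosh((x-1/2)/eps)/cosh(1/(2*eps)).
exact = 1.0 - np.cosh((x - 0.5) / 1e-2) / np.cosh(0.5 / 1e-2)
```

On an equidistant mesh of the same size the layers would be badly resolved for this ε; the condensed mesh keeps the error small uniformly in ε, which is the point of the construction.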

Relevance:

10.00%

Publisher:

Abstract:

Coastal lagoons are defined as shallow coastal water bodies partially separated from the adjacent sea by a restrictive barrier. Coastal lagoons are protected under Annex I of the European Habitats Directive (92/43/EEC). Lagoons are also considered to be “transitional water bodies” and are therefore included in the “register of protected areas” under the Water Framework Directive (2000/60/EC). Consequently, EU member states are required to establish monitoring plans and to regularly report on lagoon condition and conservation status. Irish lagoons are considered relatively rare and unusual because of their North Atlantic, macrotidal location on high energy coastlines and have received little attention. This work aimed to assess the physicochemical and ecological status of three lagoons, Cuskinny, Farranamanagh and Toormore, on the southwest coast of Ireland. Baseline salinity, nutrient and biological conditions were determined in order to provide reference conditions to detect perturbations, and to inform future maintenance of ecosystem health. Accumulation of organic matter is an increasing pressure in coastal lagoon habitats worldwide, often compounding existing eutrophication problems. This research also aimed to investigate the in situ decomposition process in a lagoon habitat together with exploring the associated invertebrate assemblages. Re-classification of the lagoons, under the guidelines of the Venice system for the classifications of marine waters according to salinity, was completed by taking spatial and temporal changes in salinity regimes into consideration. Based on the results of this study, Cuskinny, Farranamanagh and Toormore lagoons are now classified as mesohaline (5 ppt – 18 ppt), oligohaline (0.5 ppt – 5 ppt) and polyhaline (18 ppt – 30 ppt), respectively. Varying vertical, longitudinal and transverse salinity patterns were observed in the three lagoons. 
Strong correlations between salinity and cumulative rainfall highlighted the important role of precipitation in controlling the lagoon environment. The maximum effect of precipitation on lagoon salinity was observed between four and fourteen days after rainfall, depending on catchment area geology, indicating the uniqueness of each lagoon system. Seasonal nutrient patterns were evident in the lagoons. Nutrient concentrations were found to be reflective of the catchment area and the magnitude of the freshwater inflow. Assessment based on the Redfield molar ratio indicated a trend towards phosphorus, rather than nitrogen, limitation in Irish lagoons. Investigation of the decomposition process in Cuskinny Lagoon revealed that the greatest biomass loss occurred in the winter season. The lowest biomass loss occurred in spring, possibly due to the high density of invertebrates feeding on the thick microbial layer rather than the decomposing litter. It has been reported that the decomposition of plant biomass is highest in the preferential distribution area of the plant species; however, no similar trend was observed in this study, with the most active zones of decomposition varying spatially throughout the seasons. Macroinvertebrate analysis revealed low species diversity but high abundance, indicating the dominance of a small number of species. Invertebrate assemblages within the lagoon varied significantly from communities in the adjacent freshwater or marine environments. Although carried out in coastal lagoons on the southwest coast of Ireland, it is envisaged that the overall findings of this study have relevance throughout the entire island of Ireland and possibly to many North Atlantic coastal lagoon ecosystems elsewhere.
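The Venice-system boundaries quoted above translate directly into a classification helper; the sketch below uses the ppt ranges from the text, with illustrative (not measured) salinity values for the three lagoons.

```python
# Venice-system salinity classes with the ppt boundaries used in the study:
# oligohaline 0.5-5, mesohaline 5-18, polyhaline 18-30.

def venice_class(salinity_ppt):
    if salinity_ppt < 0.5:
        return "fresh"
    if salinity_ppt < 5:
        return "oligohaline"
    if salinity_ppt < 18:
        return "mesohaline"
    if salinity_ppt < 30:
        return "polyhaline"
    return "euhaline"

# The inputs are illustrative placeholders within each lagoon's quoted range,
# not measured values from the study.
assert venice_class(11.0) == "mesohaline"   # e.g. Cuskinny (5-18 ppt)
assert venice_class(2.0) == "oligohaline"   # e.g. Farranamanagh (0.5-5 ppt)
assert venice_class(24.0) == "polyhaline"   # e.g. Toormore (18-30 ppt)
```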

Relevance:

10.00%

Publisher:

Abstract:

Opioids are efficacious and cost-effective analgesics, but tolerance limits their effectiveness. This paper does not present any new clinical or experimental data, but demonstrates that there exist ascending sensory pathways that contain few opioid receptors. These pathways are located by brain PET scans and spinal cord autoradiography. These nonopioid ascending pathways include portions of the ventral spinal thalamic tract originating in Rexed layers VI-VIII, thalamocortical fibers that project to the primary somatosensory cortex (S1), and possibly a midline dorsal column visceral pathway. One hypothesis is that opioid tolerance and opioid-induced hyperalgesia may be caused by homeostatic upregulation, during opioid exposure, of nonopioid-dependent ascending pain pathways. Upregulation of sensory pathways is not a new concept and has been demonstrated in individuals impaired with deafness or blindness. A second hypothesis is that adjuvant nonopioid therapies may inhibit ascending nonopioid-dependent pathways, which would support the clinical observation that monotherapy with opioids usually fails. The uniqueness of opioid tolerance compared to tolerance associated with other central nervous system medications, and the lack of tolerance from excess hormone production, are discussed. Experimental work that could prove or disprove these concepts, as well as flaws in the concepts, is also discussed.

Relevance:

10.00%

Publisher:

Abstract:

Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks --- reverse-engineering vague claims, and countering questionable claims --- as computational problems. Within the QRS based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g. NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
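The parameter-perturbation insight can be sketched with a toy claim in the style described above. All names and numbers below are invented for illustration; the "response surface" here is just the claim's truth value over a small neighborhood of window parameters.

```python
# Toy sketch of the QRS idea: a claim is a parameterized query; we perturb its
# window parameter and see how often the conclusion survives. A claim that
# holds only at its exact parameters is a cherry-pick.

points_per_game = {2010: 12, 2011: 30, 2012: 31, 2013: 10, 2014: 9}  # invented

def claim_holds(start, end, threshold=25):
    """Claim: 'averaged at least `threshold` points over seasons start..end'."""
    vals = [v for y, v in points_per_game.items() if start <= y <= end]
    return bool(vals) and sum(vals) / len(vals) >= threshold

def robustness(start, end, width=1):
    """Fraction of nearby parameter settings where the claim still holds."""
    trials = [(s, e)
              for s in range(start - width, start + width + 1)
              for e in range(end - width, end + width + 1)
              if s <= e]
    return sum(claim_holds(s, e) for s, e in trials) / len(trials)

assert claim_holds(2011, 2012)        # the cherry-picked window checks out...
assert robustness(2011, 2012) < 0.5   # ...but the claim is fragile nearby
```

A fact-checker using this measure would flag the 2011-2012 window: the conclusion collapses under mild perturbation of its parameters.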

Relevance:

10.00%

Publisher:

Abstract:

The key problems in discussing stochastic monotonicity and duality for continuous-time Markov chains are to give the criteria for existence and uniqueness and to construct the associated monotone processes in terms of their infinitesimal q-matrices. In their recent paper, Chen and Zhang [6] discussed these problems under the condition that the given q-matrix Q is conservative. The aim of this paper is to generalize their results to a more general case, i.e., the given q-matrix Q is not necessarily conservative. New problems arise in removing the conservative assumption. The existence and uniqueness criteria for this general case are given in this paper. Another important problem, the construction of all stochastically monotone Q-processes, is also considered.
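For readers unfamiliar with q-matrices, the conservativeness condition being relaxed here is easy to state computationally: a q-matrix is conservative when every row sums to zero. A small numpy sketch (our illustration, not the paper's construction):

```python
import numpy as np

# A birth-death q-matrix on states {0,...,n}: off-diagonal entries are the
# birth and death rates, and the diagonal q_ii = -q_i balances each row.
# Q is *conservative* exactly when every row sums to zero; subtracting extra
# "killing" mass from a diagonal entry breaks that, which is the general
# (non-conservative) case the paper treats.

def birth_death_q(birth, death):
    n = len(birth)                       # birth[i]: i -> i+1; death[i]: i+1 -> i
    Q = np.zeros((n + 1, n + 1))
    for i in range(n):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # diagonal balances the off-diagonal mass
    return Q

def is_conservative(Q, tol=1e-12):
    return bool(np.all(np.abs(Q.sum(axis=1)) < tol))

Q = birth_death_q(birth=[2.0, 2.0, 2.0], death=[3.0, 3.0, 3.0])
assert is_conservative(Q)

Qn = Q.copy()
Qn[1, 1] -= 0.5                          # extra killing rate: row sum now negative
assert not is_conservative(Qn)
```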

Relevance:

10.00%

Publisher:

Abstract:

A new structure with the special property that an instantaneous reflection barrier is imposed on the ordinary birth-death processes is considered. An easily checked criterion for the existence of such Markov processes is first obtained. The uniqueness criterion is then established. In the nonunique case, all the honest processes are explicitly constructed. Ergodicity properties for these processes are investigated. It is proved that honest processes are always ergodic without necessarily imposing any extra conditions. Equilibrium distributions for all these ergodic processes are established. Several examples are provided to illustrate our results.

Relevance:

10.00%

Publisher:

Abstract:

A new structure with the special property that instantaneous resurrection and mass disaster are imposed on an ordinary birth-death process is considered. Under the condition that the underlying birth-death process is exit or bilateral, we are able to give easily checked existence criteria for such Markov processes. A very simple uniqueness criterion is also established. All honest processes are explicitly constructed. Ergodicity properties for these processes are investigated. Surprisingly, it can be proved that all the honest processes are not only recurrent but also ergodic without imposing any extra conditions. Equilibrium distributions are then established. Symmetry and reversibility of such processes are also investigated. Several examples are provided to illustrate our results.

Relevance:

10.00%

Publisher:

Abstract:

The growth of computer power allows the solution of complex problems related to compressible flow, which is an important class of problems in modern-day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. The book is organised in two parts. The first part consists of Chapters 1 and 2 and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations; existence and uniqueness results are also included. The second part presents modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as the order of numerical schemes, stability and high-order schemes. The finite volume method is illustrated for the 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Throughout the book numerous examples are included to demonstrate the numerical methods. The book provides good insight into the numerical schemes, theoretical analysis, and validation of test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practicing engineers, as well as an important reference for postgraduate researchers in the field of scientific computing and CFD.
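The finite volume material of Chapter 3 can be miniaturized to a single dependent variable: below is a Lax–Friedrichs finite volume step for linear advection with periodic boundaries, a toy stand-in for the Euler equations (our choice, not the book's worked example), picked because it makes the scheme's exact conservation visible.

```python
import numpy as np

# Minimal 1-D finite volume scheme for linear advection u_t + a u_x = 0 with
# periodic boundaries, using the Lax-Friedrichs numerical flux. The update
# structure (cell averages corrected by flux differences at the faces) is the
# same one used for the Euler equations, and it is conservative by construction.

def lax_friedrichs_step(u, a, dx, dt):
    up = np.roll(u, -1)                          # u_{i+1}, periodic wrap
    # Numerical flux at the right face of each cell:
    flux = 0.5 * a * (u + up) - 0.5 * (dx / dt) * (up - u)
    return u - (dt / dx) * (flux - np.roll(flux, 1))

N, a = 200, 1.0
dx = 1.0 / N
dt = 0.5 * dx / a                                # CFL number 0.5: stable
x = (np.arange(N) + 0.5) * dx                    # cell centers
u = np.exp(-100 * (x - 0.5) ** 2)                # smooth initial profile
mass0 = u.sum() * dx
for _ in range(200):
    u = lax_friedrichs_step(u, a, dx, dt)
# Conservation: flux differences telescope, so total "mass" is preserved
# to round-off regardless of the scheme's (considerable) numerical diffusion.
assert abs(u.sum() * dx - mass0) < 1e-10
```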

Relevance:

10.00%

Publisher:

Abstract:

Regime shifts are abrupt changes between contrasting, persistent states of any complex system. The potential for their prediction in the ocean and possible management depends upon the characteristics of the regime shifts: their drivers (from anthropogenic to natural), scale (from the local to the basin) and potential for management action (from adaptation to mitigation). We present a conceptual framework that will enhance our ability to detect, predict and manage regime shifts in the ocean, illustrating our approach with three well-documented examples: the North Pacific, the North Sea and Caribbean coral reefs. We conclude that the ability to adapt to, or manage, regime shifts depends upon their uniqueness, our understanding of their causes and linkages among ecosystem components and our observational capabilities.

Relevance:

10.00%

Publisher:

Abstract:

Ocean color measured from satellites provides daily, global estimates of marine inherent optical properties (IOPs). Semi-analytical algorithms (SAAs) provide one mechanism for inverting the color of the water observed by the satellite into IOPs. While numerous SAAs exist, most are similarly constructed and few are appropriately parameterized for all water masses for all seasons. To initiate community-wide discussion of these limitations, NASA organized two workshops that deconstructed SAAs to identify similarities and uniqueness and to progress toward consensus on a unified SAA. This effort resulted in the development of the generalized IOP (GIOP) model software that allows for the construction of different SAAs at runtime by selection from an assortment of model parameterizations. As such, GIOP permits isolation and evaluation of specific modeling assumptions, construction of SAAs, development of regionally tuned SAAs, and execution of ensemble inversion modeling. Working groups associated with the workshops proposed a preliminary default configuration for GIOP (GIOP-DC), with alternative model parameterizations and features defined for subsequent evaluation. In this paper, we: (1) describe the theoretical basis of GIOP; (2) present GIOP-DC and verify its comparable performance to other popular SAAs using both in situ and synthetic data sets; and (3) quantify the sensitivities of the output to the model parameterizations. We use the latter to develop a hierarchical sensitivity of SAAs to various model parameterizations, to identify components of SAAs that merit focus in future research, and to provide material for discussion on algorithm uncertainties and future ensemble applications.
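The magnitude-times-spectral-shape structure underlying GIOP-style SAAs can be sketched in a few lines. The spectral shapes, wavelengths and slope below are invented placeholders, and the linear least-squares inversion stands in for the nonlinear reflectance inversion a real SAA performs.

```python
import numpy as np

# GIOP-style SAAs express each IOP as a magnitude times an assumed spectral
# shape ("eigenvector") and invert for the magnitudes. Stripped-down sketch:
#   a(lam) = M_ph * aph_star(lam) + M_dg * exp(-S * (lam - 443))
# is linear in the magnitudes M, so least squares recovers them exactly from
# noise-free synthetic data. All shapes and constants here are illustrative.

lam = np.array([412.0, 443.0, 490.0, 510.0, 555.0])    # nominal band centers
aph_star = np.array([0.80, 1.00, 0.65, 0.45, 0.15])    # assumed phyto. shape
S = 0.018                                              # assumed CDM slope (1/nm)
adg_shape = np.exp(-S * (lam - 443.0))                 # assumed CDM shape

M_true = np.array([0.05, 0.02])                        # [phytoplankton, CDM]
a_obs = M_true[0] * aph_star + M_true[1] * adg_shape   # synthetic "observation"

A = np.column_stack([aph_star, adg_shape])             # design matrix of shapes
M_hat, *_ = np.linalg.lstsq(A, a_obs, rcond=None)
assert np.allclose(M_hat, M_true, atol=1e-10)          # magnitudes recovered
```

Swapping a different `adg_shape` or `aph_star` into the design matrix is the toy analogue of GIOP's runtime selection among model parameterizations.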

Relevance:

10.00%

Publisher:

Abstract:

Explaining the uniqueness of the acquired somatic JAK2 V617F mutation, which is present in more than 95% of polycythemia vera patients, has been a challenge. The V617F mutation in the pseudokinase domain of JAK2 renders the unmutated kinase domain constitutively active. We have performed random mutagenesis at position 617 of JAK2 and tested each of the 20 possible amino acids for the ability to induce constitutive signaling in Ba/F3 cells expressing the erythropoietin receptor. Four JAK2 mutants, V617W, V617M, V617I, and V617L, were able to induce cytokine independence and constitutive downstream signaling. Only V617W induced a level of constitutive activation comparable with V617F. Also, only V617W stabilized tyrosine-phosphorylated suppressor of cytokine signaling 3 (SOCS3), a mechanism by which JAK2 V617F overcomes inhibition by SOCS3. The V617W mutant induced a myeloproliferative disease in mice, mainly characterized by erythrocytosis and megakaryocytic proliferation. Although JAK2 V617W would predictably be pathogenic in humans, the substitution of the Val codon, GTC, by TGG, the codon for Trp, would require three base-pair changes, and thus it is unlikely to occur. We discuss how the predicted conformations of the activated JAK2 mutants can lead to better screening assays for novel small molecule inhibitors.
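The codon-accessibility argument above is simple arithmetic over base changes; the sketch below reproduces it (the codon assignments are the standard genetic code):

```python
# Codon arithmetic behind the accessibility argument: V617F needs a single
# base change from the Val codon GTC, whereas reaching Trp (TGG) would need
# three simultaneous changes, making a somatic V617W mutation vanishingly
# unlikely even though it signals comparably in vitro.

def base_changes(codon_a, codon_b):
    """Number of differing positions between two 3-letter codons."""
    assert len(codon_a) == len(codon_b) == 3
    return sum(x != y for x, y in zip(codon_a, codon_b))

VAL = "GTC"                             # codon at JAK2 position 617
assert base_changes(VAL, "TTC") == 1    # Phe (V617F): one G->T change
assert base_changes(VAL, "TGG") == 3    # Trp (V617W): three changes required
```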