944 results for refinement calculus


Relevance:

10.00%

Abstract:

A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite-rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
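
The basic behaviour formalized in λweak, namely that a weak reference does not keep its referent reachable and is cleared once the object has been collected, can be illustrated in any language with weak references. A minimal sketch in Python (an illustration of the informal semantics only, not of the calculus itself):

```python
import gc
import weakref

class Node:
    """A minimal heap-allocated object to point at."""
    def __init__(self, payload):
        self.payload = payload

strong = Node("reachable")      # strong reference: keeps the object alive
weak = weakref.ref(strong)      # weak reference: does not keep it alive

print(weak() is strong)         # True -- the referent is still reachable

del strong                      # drop the only strong reference
gc.collect()                    # run the collector (CPython would also free it on the refcount drop)

print(weak())                   # None -- the weak reference has been cleared
```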

Relevance:

10.00%

Abstract:

Weak references provide the programmer with limited control over the process of memory management. By using them, a programmer can make decisions based on actions previously taken by the garbage collector. Although this is often helpful, the outcome of a program using weak references is less predictable due to the nondeterminism they introduce in program evaluation. It is therefore desirable to have a framework of formal tools to reason about weak references and programs that use them. We present several calculi that formalize various aspects of weak references, inspired by their implementation in Java. We provide a calculus to model multiple levels of non-strong references, where a different garbage collection policy is applied to each level. We consider different collection policies such as eager collection and lazy collection. Similar to the way they are implemented in Java, we give the semantics of eager collection to weak references and the semantics of lazy collection to soft references. Moreover, we condition garbage collection on the availability of time and space resources: while time constraints are used to restrict garbage collection, space constraints are used to trigger it. Finalizers are a problematic feature in Java, especially when they interact with weak references. We provide a calculus to model finalizer evaluation. Since finalizers have little meaning in a language without side effects, we introduce a limited form of side effects into the calculus. We discuss determinism and the separate notion of uniqueness of (evaluation) outcome. We show that in our calculus, finalizer evaluation does not affect uniqueness of outcome.
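
Finalizer evaluation triggered by collection, and its interaction with weak references, can likewise be sketched concretely. A minimal Python illustration (Python's weakref.finalize stands in for Java finalizers here; this is not the calculus from the abstract):

```python
import gc
import weakref

class Resource:
    def __init__(self, name):
        self.name = name

def finalizer(name):
    # Runs after the object has become unreachable and is collected.
    print(f"finalizing {name}")

r = Resource("cache-entry")
w = weakref.ref(r)                      # non-strong reference to r
weakref.finalize(r, finalizer, r.name)  # register a finalizer for r

del r                                   # drop the strong reference
gc.collect()                            # collection clears w and runs the finalizer

print(w())                              # None: the weak reference was cleared
```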

Relevance:

10.00%

Abstract:

The healthcare industry is beginning to appreciate the benefits which can be obtained from using Mobile Health Systems (MHS) at the point-of-care. As a result, healthcare organisations are investing heavily in mobile health initiatives with the expectation that users will employ the system to enhance performance. Despite widespread endorsement and support for the implementation of MHS, empirical evidence surrounding the benefits of MHS remains to be fully established. For MHS to be truly valuable, it is argued that the technological tool be infused within healthcare practitioners' work practices and used to its full potential in post-adoptive scenarios. Yet, there is a paucity of research focusing on the infusion of MHS by healthcare practitioners. In order to address this gap in the literature, the objective of this study is to explore the determinants and outcomes of MHS infusion by healthcare practitioners. This research study adopts a post-positivist theory-building approach to MHS infusion. Existing literature is utilised to develop a conceptual model by which the research objective is explored. Employing a mixed-method approach, this conceptual model is first advanced through a case study in the UK, whereby propositions established from the literature are refined into testable hypotheses. The final phase of this research study involves the collection of empirical data from a Canadian hospital, which supports the refined model and its associated hypotheses. The results from both phases of data collection are employed to develop a model of MHS infusion. The study contributes to IS theory and practice by: (1) developing a model with six determinants (Availability, MHS Self-Efficacy, Time-Criticality, Habit, Technology Trust, and Task Behaviour) and individual performance-related outcomes of MHS infusion (Effectiveness, Efficiency, and Learning), (2) examining undocumented determinants and relationships, (3) identifying prerequisite conditions that both healthcare practitioners and organisations can employ to assist with MHS infusion, (4) developing a taxonomy that provides conceptual refinement of IT infusion, and (5) informing healthcare organisations and vendors as to the performance of MHS in post-adoptive scenarios.

Relevance:

10.00%

Abstract:

Modern neuroscience relies heavily on sophisticated tools that allow us to visualize and manipulate cells with precise spatial and temporal control. Transgenic mouse models, for example, can be used to manipulate cellular activity in order to draw conclusions about the molecular events responsible for the development, maintenance and refinement of healthy and/or diseased neuronal circuits. Although it is fairly well established that circuits respond to activity-dependent competition between neurons, we have yet to understand either the mechanisms underlying these events or the higher-order plasticity that synchronizes entire circuits. In this thesis we aimed to develop and characterize transgenic mouse models that can be used to directly address these outstanding biological questions in different ways. We present SLICK-H, a Cre-expressing mouse line that can achieve drug-inducible, widespread, neuron-specific manipulations in vivo. This model is a clear improvement over existing models because of its particularly strong, widespread, and even distribution pattern that can be tightly controlled in the absence of drug induction. We also present SLICK-V::Ptox, a mouse line that, through expression of the tetanus toxin light chain, allows long-term inhibition of neurotransmission in a small subset (<1%) of fluorescently labeled pyramidal cells. This model, which can be used to study how a silenced cell performs in a wildtype environment, greatly facilitates the in vivo study of activity-dependent competition in the mammalian brain. As an initial application we used this model to show that tetanus toxin-expressing CA1 neurons experience a 15% - 19% decrease in apical dendritic spine density. Finally, we also describe the attempt to create additional Cre-driven mouse lines that would allow conditional alteration of neuronal activity either by hyperpolarization or inhibition of neurotransmission. Overall, the models characterized in this thesis expand upon the wealth of tools available that aim to dissect neuronal circuitry by genetically manipulating neurons in vivo.

Relevance:

10.00%

Abstract:

A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators, designed as both infinite-impulse-response (IIR) and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise frequencies above the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. It is shown that the proposed differentiators perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm whose objectives include both magnitude and phase response, and the low-pass characteristic is achieved by minimizing the filter variance. The resulting low-pass differentiators show steep roll-off as well as highly accurate magnitude response in the pass-band. Although it has a history of over three hundred years, the design of fractional differentiators has become a 'hot topic' in recent decades. One challenging problem in this area is that there are many different definitions of the fractional model, such as the Riemann-Liouville and Caputo definitions. Using a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis: the first applies second-degree differentiators to the estimation of the frequency components of a power system, and the second concerns an image-processing application, edge detection.
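
The central design goal described above, a differentiator whose magnitude response tracks the ideal |H(ω)| = ω at low frequencies but rolls off at high frequencies, can be checked numerically. A minimal sketch using a standard Savitzky-Golay derivative FIR filter (an illustrative stand-in; the thesis's own coefficients come from its optimization algorithm):

```python
import numpy as np
from scipy.signal import savgol_coeffs, freqz

# A standard low-pass FIR differentiator: Savitzky-Golay first-derivative
# coefficients (9-sample window, quadratic local fit). Not the thesis's
# optimized design, only a familiar baseline with the same qualitative goal:
# derivative-like response at low frequencies, attenuation at high ones.
h = savgol_coeffs(9, polyorder=2, deriv=1, use='conv')

w, H = freqz(h, worN=512)                    # frequency response on [0, pi)

k = 25                                       # a low-frequency bin (omega ~ 0.15 rad/sample)
print("ideal |H| = omega :", w[k])
print("filter |H|        :", abs(H[k]))      # close to omega at low frequencies
print("filter |H| at pi  :", abs(H[-1]))     # ~0: high-frequency noise is rejected
```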

Relevance:

10.00%

Abstract:

Colloidal photonic crystals have potential light-manipulation applications, including the fabrication of efficient lasers and LEDs, improved optical sensors and interconnects, and improved photovoltaic efficiencies. One roadblock of colloidal self-assembly is its inherent defects; however, compared with micro-fabrication methods, colloidal crystals can be manufactured cost-effectively into large-area films. This thesis investigates the production of 'large-area' colloidal photonic crystals by sonication, under-oil co-crystallization, and controlled evaporation, with a view to reducing cracking and other defects. A simple monotonic Stöber particle synthesis method was developed, producing silica particles in the range of 80 to 600 nm in a single step. An analytical method that assesses the quality of surface particle ordering in a semi-quantitative manner was developed: a grey-scale symmetry-area method based on fast Fourier transform (FFT) spot intensities was used to quantify the FFT profiles. Adding ultrasonic vibrations during film formation demonstrated that large areas could be assembled rapidly; however, film ordering suffered as a result. Under-oil co-crystallisation results in the particles being bound together during film formation; while it has the potential to form large areas, it requires further refinement to be established as a production technique. Achieving high-quality photonic crystals bonded with low concentrations (<5%) of polymeric adhesives while maintaining refractive-index contrast proved difficult and degraded the films' uniformity. A controlled evaporation method, using a mixed-solvent suspension, represents the most promising route to high-quality films over large areas (75 mm × 25 mm). In this mixed-solvent approach, the film is kept in the wet state longer, reducing the cracks that develop during the drying stage. These films are crack-free up to a critical thickness and show very large domains, which are visible in low-magnification SEM images as Moiré fringe patterns. Higher magnification reveals that the separations between alternate fringe patterns are domain boundaries between individual crystalline growth fronts.
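
The thesis's grey-scale symmetry-area analysis is not reproduced here, but the underlying idea, quantifying surface ordering from how strongly the 2D-FFT power concentrates in discrete spots, can be sketched on synthetic images. All names, thresholds and test patterns below are illustrative assumptions:

```python
import numpy as np

def fft_peak_fraction(img):
    """Fraction of non-DC FFT power held by the brightest spectral pixels.
    Higher values mean sharper Bragg-like spots, i.e. better ordering."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    c0, c1 = F.shape[0] // 2, F.shape[1] // 2
    F[c0 - 2:c0 + 3, c1 - 2:c1 + 3] = 0.0       # mask the DC component
    flat = np.sort(F.ravel())[::-1]
    return flat[:50].sum() / flat.sum()         # power in the 50 strongest pixels

# Synthetic 'SEM images': an ordered square lattice of particles vs. random speckle.
rng = np.random.default_rng(0)
x = np.arange(256)
xx, yy = np.meshgrid(x, x)

ordered = (np.cos(2 * np.pi * xx / 16) * np.cos(2 * np.pi * yy / 16) > 0.5).astype(float)
disordered = (rng.random((256, 256)) > 0.9).astype(float)

print("ordered    :", fft_peak_fraction(ordered))     # power concentrated in spots
print("disordered :", fft_peak_fraction(disordered))  # power spread out, lower value
```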

Relevance:

10.00%

Abstract:

Background: The Early Development Instrument (EDI) is a population-level measure of five developmental domains at school-entry age. The overall aim of this thesis was to explore the potential of the EDI as an indicator of early development in Ireland. Methods: A cross-sectional study was conducted in 47 primary schools in 2011 using the EDI and a linked parental questionnaire. EDI (teacher-completed) scores were calculated for 1,344 children in their first year of full-time education. Those scoring in the lowest 10% of the sample population in one or more domains were deemed to be 'developmentally vulnerable'. Scores were correlated with contextual data from the parental questionnaire and with indicators of area- and school-level deprivation. Rasch analysis was used to determine the validity of the EDI. Results: Over one quarter (27.5%) of all children in the study were developmentally vulnerable. Individual characteristics associated with increased risk of vulnerability were being male, being under 5 years old, and having English as a second language. Adjusted for these demographics, low birth weight, poor parent/child interaction and mother's lower level of education showed the most significant odds ratios for developmental vulnerability. Vulnerability did not follow the area-level deprivation gradient as measured by a composite index of material deprivation. Children considered by the teacher to be in need of assessment also had lower scores, which were not significantly different from those of children with a clinical diagnosis of special needs. All domains showed at least reasonable fit to the Rasch model, supporting the validity of the instrument; however, there was a need for further refinement of the instrument in the Irish context. Conclusion: This thesis provides a unique snapshot of early development in Ireland. The EDI and linked parental questionnaires are promising indicators of the extent, distribution and determinants of developmental vulnerability.
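
The "lowest 10% in one or more domains" vulnerability rule is straightforward to express on tabular score data. A minimal sketch on synthetic data (the domain column names and scores are hypothetical, not the actual EDI dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical domain columns standing in for the five EDI domains; in the
# study these would be teacher-completed scores per child.
domains = ["physical", "social", "emotional", "language_cognitive", "communication"]

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(1344, 5)), columns=domains)

# A child is 'developmentally vulnerable' if they score in the lowest 10%
# of the sample on one or more domains.
cutoffs = df[domains].quantile(0.10)          # per-domain 10th percentile
vulnerable = (df[domains] <= cutoffs).any(axis=1)

df["vulnerable"] = vulnerable
print(f"{vulnerable.mean():.1%} of children flagged as developmentally vulnerable")
```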

Relevance:

10.00%

Abstract:

The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain a consistency in the ‘production process’. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers who are trusted to manage the necessary tasks, and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims at addressing this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours - potential power mechanisms - that can inhibit the double loop learning inherent in an Agile adoption, to determine how the Agile processes and behaviours can create these power mechanisms, and how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers. An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement to the framework, with changes required due to observations which were often different to what existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.

Relevance:

10.00%

Abstract:

PURPOSE: To define the biology driving the aggressive nature of breast cancer arising in young women. EXPERIMENTAL DESIGN: Among 784 patients with early-stage breast cancer, using prospectively defined, age-specific cohorts (young, ≤45 years, or older, ≥65 years), 411 eligible patients (n = 200 aged ≤45 years and n = 211 aged ≥65 years) with clinically annotated Affymetrix microarray data were identified. GSEA, signatures of oncogenic pathway deregulation and predictors of chemotherapy sensitivity were evaluated within the two age-defined cohorts. RESULTS: In comparing deregulation of oncogenic pathways between age groups, a higher probability of PI3K (p = 0.006) and Myc (p = 0.03) pathway deregulation was observed in breast tumors arising in younger women. When evaluating unique patterns of pathway deregulation, a low probability of Src and E2F deregulation in tumors of younger women, concurrent with a higher probability of PI3K, Myc, and beta-catenin deregulation, conferred a worse prognosis (HR = 4.15). In contrast, a higher probability of Src and E2F pathway activation in tumors of older women, with a concurrent low probability of PI3K, Myc and beta-catenin deregulation, was associated with poorer outcome (HR = 2.7). In multivariate analyses, genomic clusters of pathway deregulation illustrate prognostic value. CONCLUSION: Results demonstrate that breast cancer arising in young women represents a distinct biologic entity characterized by unique patterns of deregulated signaling pathways that are prognostic, independent of currently available clinico-pathologic variables. These results should enable refinement of targeted treatment strategies in this clinically challenging situation.

Relevance:

10.00%

Abstract:

During cortical synaptic development, thalamic axons must establish synaptic connections despite the presence of the more abundant intracortical projections. How thalamocortical synapses are formed and maintained in this competitive environment is unknown. Here, we show that the astrocyte-secreted protein hevin is required for normal thalamocortical synaptic connectivity in the mouse cortex. Absence of hevin results in a profound, long-lasting reduction in thalamocortical synapses accompanied by a transient increase in intracortical excitatory connections. Three-dimensional reconstructions of cortical neurons from serial-section electron microscopy (ssEM) revealed that, during early postnatal development, dendritic spines often receive multiple excitatory inputs. Immuno-EM and confocal analyses revealed that the majority of spines with multiple excitatory contacts (SMECs) receive simultaneous thalamic and cortical inputs. The proportion of SMECs diminishes as the brain develops, but SMECs remain abundant in hevin-null mice. These findings reveal that, through secretion of hevin, astrocytes control an important developmental synaptic refinement process at dendritic spines.

Relevance:

10.00%

Abstract:

Transcranial magnetic stimulation (TMS) is a widely used, noninvasive method for stimulating nervous tissue, yet its mechanisms of effect are poorly understood. Here we report new methods for studying the influence of TMS on single neurons in the brain of alert non-human primates. We designed a TMS coil that focuses its effect near the tip of a recording electrode and recording electronics that enable direct acquisition of neuronal signals at the site of peak stimulus strength minimally perturbed by stimulation artifact in awake monkeys (Macaca mulatta). We recorded action potentials within ∼1 ms after 0.4-ms TMS pulses and observed changes in activity that differed significantly for active stimulation as compared with sham stimulation. This methodology is compatible with standard equipment in primate laboratories, allowing easy implementation. Application of these tools will facilitate the refinement of next generation TMS devices, experiments and treatment protocols.

Relevance:

10.00%

Abstract:

A novel multi-scale seamless model of brittle-crack propagation is proposed and applied to the simulation of fracture growth in a two-dimensional Ag plate with macroscopic dimensions. The model represents the crack propagation at the macroscopic scale as the drift-diffusion motion of the crack tip alone. The diffusive motion is associated with the crack-tip coordinates in the position space, and reflects the oscillations observed in the crack velocity following its critical value. The model couples the crack dynamics at the macroscales and nanoscales via an intermediate mesoscale continuum. The finite-element method is employed to make the transition from the macroscale to the nanoscale by computing the continuum-based displacements of the atoms at the boundary of an atomic lattice embedded within the plate and surrounding the tip. Molecular dynamics (MD) simulation then drives the crack tip forward, producing the tip critical velocity and its diffusion constant. These are then used in the Ito stochastic calculus to make the reverse transition from the nanoscale back to the macroscale. The MD-level modelling is based on the use of a many-body potential. The model successfully reproduces the crack-velocity oscillations, roughening transitions of the crack surfaces, as well as the macroscopic crack trajectory. The implications for a 3-D modelling are discussed.
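
The macroscale step described above, advancing the crack tip using the MD-derived critical velocity and diffusion constant via the Ito stochastic calculus, amounts to integrating a drift-diffusion SDE. A minimal one-dimensional Euler-Maruyama sketch with placeholder coefficients (the actual values in the model come from the molecular-dynamics stage):

```python
import numpy as np

# Placeholder coefficients standing in for the MD-derived quantities:
# v_tip is the crack-tip drift (critical) velocity, D its diffusion constant.
v_tip = 1.0e3      # drift velocity of the crack tip   [m/s]
D = 1.0e-4         # diffusion constant of the tip     [m^2/s]
dt = 1.0e-9        # macroscale time step              [s]
steps = 10_000

rng = np.random.default_rng(42)
x = np.zeros(steps + 1)        # crack-tip position along the propagation direction [m]

# Euler-Maruyama integration of the Ito SDE  dx = v_tip dt + sqrt(2 D) dW
for n in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    x[n + 1] = x[n] + v_tip * dt + np.sqrt(2.0 * D) * dW

print("simulated tip advance :", x[-1])             # close to the deterministic drift
print("expected drift        :", v_tip * steps * dt)
```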

Relevance:

10.00%

Abstract:

A novel multiscale model of brittle crack propagation in an Ag plate with macroscopic dimensions has been developed. The model represents crack propagation as stochastic drift-diffusion motion of the crack tip atom through the material, and couples the dynamics across three different length scales. It integrates the nanomechanics of bond rupture at the crack tip with the displacement and stress field equations of continuum based fracture theories. The finite element method is employed to obtain the continuum based displacement and stress fields over the macroscopic plate, and these are then used to drive the crack tip forward at the atomic level using the molecular dynamics simulation method based on many-body interatomic potentials. The linkage from the nanoscopic scale back to the macroscopic scale is established via the Ito stochastic calculus, the stochastic differential equation of which advances the tip to a new position on the macroscopic scale using the crack velocity and diffusion constant obtained on the nanoscale. Well known crack characteristics, such as the roughening transitions of the crack surfaces, crack velocity oscillations, as well as the macroscopic crack trajectories, are obtained.