978 results for Advanced Transaction Models


Relevance: 30.00%

Abstract:

Subsidence is a natural hazard that affects wide areas in the world, causing important economic costs annually. This phenomenon has occurred in the metropolitan area of Murcia City (SE Spain) as a result of groundwater overexploitation. In this work aquifer-system subsidence is investigated using an advanced differential SAR interferometry remote sensing technique (A-DInSAR) called Stable Point Network (SPN). The SPN-derived displacement results, mainly the velocity displacement maps and the time series of the displacement, reveal that in the period 2004–2008 the rate of subsidence in the Murcia metropolitan area doubled with respect to the previous period from 1995 to 2005. The acceleration of the deformation phenomenon is explained by the drought period that started in 2006. The comparison of the temporal evolution of the displacements measured with the extensometers and the SPN technique shows an average absolute error of 3.9±3.8 mm. Finally, results from a finite element model developed to simulate the recorded subsidence time history from known water-table height changes compare well with the SPN displacement time series estimations. This result demonstrates the potential of A-DInSAR techniques to validate subsidence prediction models as an alternative to using instrumental ground-based techniques for validation.
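
The 3.9±3.8 mm agreement quoted above is a mean and spread of absolute differences between co-located displacement time series. A minimal sketch of that statistic (the measurement values below are hypothetical, not the paper's data):

```python
import numpy as np

def displacement_error(insar_mm, extenso_mm):
    """Mean and standard deviation of the absolute differences
    between two co-located displacement time series (values in mm)."""
    diff = np.abs(np.asarray(insar_mm) - np.asarray(extenso_mm))
    return diff.mean(), diff.std()

# Hypothetical cumulative subsidence readings (mm), illustrative only:
insar = [0.0, -4.1, -8.3, -12.9, -17.6]
extenso = [0.0, -3.0, -9.5, -11.8, -16.0]
mae, spread = displacement_error(insar, extenso)
```

The same two numbers would be computed for each extensometer location and then pooled over sites.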

Relevance: 30.00%

Abstract:

The process of liquid silicon infiltration is investigated for channels with radii from 0.25 to 0.75 mm drilled in compact carbon preforms. The advantage of this setup is that it simplifies the study of the phenomenon. For comparison purposes, attempts are made to work out a framework for evaluating the accuracy of simulations. The approach relies on dimensionless numbers involving the properties of the surface reaction. It turns out that the complex hydrodynamic behavior derived from Newton's second law can be made consistent with Lattice-Boltzmann simulations. The experiments give clear evidence that the growth of silicon carbide proceeds in two different stages, and the basic mechanisms are highlighted. Lattice-Boltzmann simulations prove to be an effective tool for the description of the growing phase; namely, essential experimental constraints can be implemented. As a result, the existing models are useful to gain more insight into the process of reactive infiltration into porous media in the first stage of penetration, i.e. up to pore closure caused by surface growth. A way to incorporate the resistance arising from the chemical reaction into Darcy's law is also proposed.
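
The closing idea, folding a reaction resistance into Darcy's law, can be pictured as flow resistances acting in series. This is an illustrative formulation under that assumption, not the authors' model; all parameter values are invented:

```python
def darcy_flux(dp, mu, k, length, r_chem=0.0):
    """Superficial velocity from Darcy's law with an optional series
    resistance standing in for the surface reaction.
    Viscous resistance is mu*length/k; r_chem simply adds to it."""
    r_viscous = mu * length / k
    return dp / (r_viscous + r_chem)

# Hypothetical values: dp [Pa], mu [Pa*s], k [m^2], length [m]
v_plain = darcy_flux(dp=1e3, mu=1e-3, k=1e-12, length=1e-2)
v_react = darcy_flux(dp=1e3, mu=1e-3, k=1e-12, length=1e-2, r_chem=5e6)
```

With a non-zero reaction resistance the predicted infiltration velocity drops, which is the qualitative effect the abstract describes.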

Relevance: 30.00%

Abstract:

This paper sketches the main features and issues related to recent market developments in global transaction banking (GTB), particularly in trade finance, cash management and correspondent banking. It describes the basic functioning of GTB, its interaction with global financial markets and the related implications of global regulatory developments such as Basel III. Interest in GTB has recently increased, as its low-risk profile, tendency to follow growth rates worldwide and relative independence from other financial instruments have made it an interesting diversification opportunity both for banks' business models and for investors. Transaction banking has been a resilient business during the crisis, despite the reduction in world trade figures. In the post-crisis period, GTB must cope with new challenges related to increased local and global regulation and the risk of inconsistency in regulatory approaches, which could negatively impact the global network, as well as increased competition from new market entrants. The increased sophistication of corporate clients, as well as the pressure to develop and adopt technological innovations more quickly than other areas of banking, continues to impact the business. The future of the industry closely depends on its ability to adjust to complex regulatory developments while at the same time being able to operate a global and efficient network.

Relevance: 30.00%

Abstract:

BACKGROUND Researchers evaluating angiomodulating compounds as part of scientific projects or pre-clinical studies are often confronted with the limitations of the applied animal models. Rough and insufficient early-stage compound assessment without reliable quantification of the vascular response accounts, at least partially, for the low rate of transition to the clinic. OBJECTIVE To establish an advanced, rapid and cost-effective angiogenesis assay for the precise and sensitive assessment of angiomodulating compounds using zebrafish caudal fin regeneration. It should provide information regarding the angiogenic mechanisms involved and should include qualitative and quantitative data on drug effects in a non-biased and time-efficient way. APPROACH & RESULTS Basic vascular parameters (total regenerated area, vascular projection area, contour length, vessel area density) were extracted from in vivo fluorescence microscopy images using a stereological approach. Skeletonization of the vasculature by our custom-made software Skelios provided additional parameters, including "graph energy" and "distance to farthest node". The latter gave important insights into the complexity, connectivity and maturation status of the regenerating vascular network. The employment of a reference point (vascular parameters prior to amputation) is unique for the model and crucial for a proper assessment. Additionally, the assay provides exceptional possibilities for correlative microscopy by combining in vivo imaging and morphological investigation of the area of interest. The 3-way correlative microscopy links the dynamic changes in vivo with their structural substrate at the subcellular level. CONCLUSIONS The improved zebrafish fin regeneration model with advanced quantitative analysis and optional 3-way correlative morphology is a promising in vivo angiogenesis assay, well suited for basic research and preclinical investigations.
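
A "distance to farthest node" parameter of the kind mentioned is a graph eccentricity, computable by breadth-first search over the skeletonised vessel network. A toy sketch under that reading (the adjacency structure below is invented for illustration, not Skelios output):

```python
from collections import deque

def distance_to_farthest_node(adj, start):
    """BFS over an unweighted skeleton graph (dict: node -> neighbours);
    returns the hop count from `start` to the farthest reachable node."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in seen:
                seen[nb] = seen[node] + 1
                queue.append(nb)
    return max(seen.values())

# Toy vascular skeleton: a branch point (node 1) with two limbs
toy = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
```

Tracking this value over regeneration time would reflect how far the network extends from a chosen reference node.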

Relevance: 30.00%

Abstract:

In relation to motor control, the basal ganglia have been implicated in both the scaling and focusing of movement. Hypokinetic and hyperkinetic movement disorders manifest as a consequence of overshooting and undershooting GPi (globus pallidus internus) activity thresholds, respectively. Recently, models of motor control have been borrowed to translate cognitive processes relating to the overshooting and undershooting of GPi activity, including attention and executive function. Linguistic correlates, however, are yet to be extrapolated in sufficient detail. The aims of the present investigation were to: (1) characterise cognitive-linguistic processes within hypokinetic and hyperkinetic neural systems, as defined by motor disturbances; (2) investigate the impact of surgically-induced GPi lesions upon language abilities. Two Parkinsonian cases with opposing motor symptoms (akinetic versus dystonic/dyskinetic) served as experimental subjects in this research. Assessments were conducted both prior to as well as 3 and 12 months following bilateral posteroventral pallidotomy (PVP). Reliable changes in performance (i.e. both improvements and decrements) were typically restricted to tasks demanding complex linguistic operations across subjects. Hyperkinetic motor symptoms were associated with an initial overall improvement in complex language function as a consequence of bilateral PVP, which diminished over time, suggesting a decrescendo effect relative to surgical beneficence. In contrast, hypokinetic symptoms were associated with a more stable longitudinal linguistic profile, albeit defined by higher proportions of reliable decline versus improvement in postoperative assessment scores. The above findings endorsed the integration of the GPi within cognitive mechanisms involved in the arbitration of complex language functions. 
In relation to models of motor control, 'focusing' was postulated to represent the neural processes underpinning lexical-semantic manipulation, and 'scaling' the potential allocation of cognitive resources during the mediation of high-level linguistic tasks. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care, and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and stage of the case study. Sampling included a total population of 41 nurses, from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients was recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analysis demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity of the factors determining the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings also highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.

Relevance: 30.00%

Abstract:

Purpose: PI-88 is a mixture of highly sulfated oligosaccharides that inhibits heparanase, an extracellular matrix endoglycosidase, and the binding of angiogenic growth factors to heparan sulfate. This agent showed potent inhibition of placental blood vessel angiogenesis as well as growth inhibition in multiple xenograft models, thus forming the basis for this study. Experimental Design: This study evaluated the toxicity and pharmacokinetics of PI-88 (80-315 mg) when administered s.c. daily for 4 consecutive days bimonthly (part 1) or weekly (part 2). Results: Forty-two patients [median age, 53 years (range, 19-78 years); median performance status, 1] with a range of advanced solid tumors received a total of 232 courses. The maximum tolerated dose was 250 mg/d. Dose-limiting toxicity consisted of thrombocytopenia and pulmonary embolism. Other toxicity was generally mild and included prolongation of the activated partial thromboplastin time and injection site ecchymosis. The pharmacokinetics were linear with dose. Intrapatient variability was low and interpatient variability was moderate. Both AUC and C-max correlated with the percent increase in activated partial thromboplastin time, showing that this pharmacodynamic end point can be used as a surrogate for drug exposure. No association between PI-88 administration and vascular endothelial growth factor or basic fibroblast growth factor levels was observed. One patient with melanoma had a partial response, which was maintained for >50 months, and 9 patients had stable disease for >= 6 months. Conclusion: The recommended dose of PI-88 administered for 4 consecutive days bimonthly or weekly is 250 mg/d. PI-88 was generally well tolerated. Evidence of efficacy in melanoma supports further evaluation of PI-88 in phase II trials.
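
AUC and C-max, the exposure metrics correlated with the aPTT increase, are standard non-compartmental quantities. A sketch using the linear trapezoidal rule (the concentration-time values are hypothetical, not trial data):

```python
import numpy as np

def auc_cmax(times_h, conc):
    """AUC(0-t) by the linear trapezoidal rule, plus Cmax, from paired
    sampling times and measured drug concentrations."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    auc = float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0)
    return auc, float(c.max())

# Hypothetical profile: times in hours, concentrations in arbitrary units
auc, cmax = auc_cmax([0, 1, 2, 4, 8], [0.0, 10.0, 8.0, 4.0, 1.0])
```

Correlating either value against the percent aPTT change across patients is what supports using aPTT as an exposure surrogate.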

Relevance: 30.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies.
Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
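
Zeroth-order Tikhonov regularization of the kind the scheme builds on solves a penalised least-squares problem; a minimal sketch via the regularized normal equations (the toy matrix and fixed weight are illustrative; the paper's contribution is estimating the regularisation weights themselves during calibration):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 by forming the
    regularized normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-posed toy problem: nearly collinear columns make the
# unregularized inverse problem unstable.
A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
b = np.array([2.0, 2.0, 2.0])
x_reg = tikhonov_solve(A, b, lam=0.1)
```

The penalty term stabilises the solution where the data cannot distinguish the two parameters, which is the behaviour described for non-uniquely-estimable parameter combinations.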

Relevance: 30.00%

Abstract:

Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems are currently not well scalable and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.

Relevance: 30.00%

Abstract:

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
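
The naive mean field method the book starts from can be illustrated on a tiny Ising-type model by iterating the self-consistency equation m_i = tanh(beta * (sum_j J_ij m_j + h_i)). The couplings and fields below are arbitrary toy values:

```python
import numpy as np

def mean_field_magnetisation(J, h, beta, iters=200):
    """Naive mean field fixed-point iteration for an Ising-type model
    with symmetric coupling matrix J and local fields h."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m

# Two ferromagnetically coupled spins with a small positive field
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.array([0.1, 0.1])
m = mean_field_magnetisation(J, h, beta=0.5)
```

The TAP approach mentioned in the abstract would add an Onsager reaction term to the argument of tanh; the variational and graphical-model methods generalise this factorised approximation further.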

Relevance: 30.00%

Abstract:

This thesis describes a detailed study of advanced fibre grating devices using Bragg (FBG) and long-period (LPG) structures and their applications in optical communications and sensing. The major contributions presented in this thesis are summarised below. One of the most important contributions from the research work presented in this thesis is a systematic theoretical study of many distinguishing structures of fibre gratings. Starting from the Maxwell equations, the coupled-mode equations for both FBG and LPG were derived and the mode-overlap factor was analytically discussed. Computer simulation programmes utilising a matrix-transform method, based on the models built upon the coupled-mode equations, were developed, enabling simulations of the spectral response in terms of reflectivity, bandwidth, sidelobes and dispersion for gratings of different structures, including uniform and chirped, phase-shifted, Moiré and sampled Bragg gratings, and phase-shifted and cascaded long-period gratings. Although the majority of these structures were modelled numerically, analytical expressions for some complex structures were developed with a clear physical picture. Several apodisation functions were proposed to improve sidelobe suppression, which guided the effective production of practical devices for demanding applications. Fibre grating fabrication is the other major part of the Ph.D. programme. Both the holographic and scan-phase-mask methods were employed to fabricate Bragg and long-period gratings of standard and novel structures. Significant improvements were made in the scan-phase-mask method in particular, to enable arbitrary tailoring of the spectral response of grating devices. Two specific techniques - slow-shifting and fast-dithering of the phase-mask, implemented by a computer-controlled piezo - were developed to write high-quality phase-shifted, sampled and apodised gratings.
A large number of LabVIEW programmes were constructed to implement standard and novel fabrication techniques. In addition, some fundamental studies of grating growth relating to UV exposure and the hydrogenation-induced index were carried out. In particular, Type IIa gratings in non-hydrogenated B/Ge co-doped fibres and a re-generated grating in hydrogenated B/Ge fibre were investigated, showing a significant reduction in the thermal coefficient. Optical sensing applications utilising fibre grating devices form the third major part of the research work presented in this thesis. Several experiments on novel sensing and sensing-demodulation were implemented. For the first time, an intensity and wavelength dual-coding interrogation technique was demonstrated, showing significantly enhanced capacity for grating sensor multiplexing. Based on mode-splitting measurement, instead of the conventional wavelength-shifting detection technique, successful demonstrations were also made of optical load and bend sensing of ultra-high sensitivity employing LPG structures. In addition, edge-filters and low-loss, high-rejection bandpass filters with a 50 nm stop-band were fabricated for applications in optical sensing and high-speed telecommunication systems.
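
For a uniform FBG, the coupled-mode equations mentioned above have a closed-form solution for reflectivity versus detuning, which is the textbook result such simulations reproduce. A sketch of that formula (the coupling coefficient and length below are illustrative, not the thesis's gratings):

```python
import numpy as np

def fbg_reflectivity(kappa, detuning, length):
    """Reflectivity of a uniform fibre Bragg grating from the standard
    coupled-mode solution; kappa is the AC coupling coefficient [1/m],
    detuning the wavenumber offset from the Bragg condition [1/m].
    Complex arithmetic handles the detuned (oscillatory) regime."""
    gamma = np.sqrt(complex(kappa**2 - detuning**2))
    num = (kappa * np.sinh(gamma * length))**2
    den = (detuning * np.sinh(gamma * length))**2 + (gamma * np.cosh(gamma * length))**2
    return float(np.real(num / den))

# At the Bragg wavelength (zero detuning) with kappa*L = 2:
R_peak = fbg_reflectivity(kappa=200.0, detuning=0.0, length=0.01)
```

At zero detuning the expression reduces to tanh^2(kappa*L), the familiar peak-reflectivity result; transfer-matrix codes chain this solution over short sub-sections to model chirped, sampled or phase-shifted structures.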

Relevance: 30.00%

Abstract:

This thesis reports the results of research into the connections between transaction attributes and buyer-supplier relationships (BSR) in advanced manufacturing technology (AMT) acquisition and implementation. It also examines the impact of different patterns of BSR on performance. Specifically, it addresses the issues of how the three transaction attributes, namely level of complexity, level of asset specificity and level of uncertainty, can affect the relationships between the technology buyer and supplier in AMT acquisition and implementation, and then examines the impact of different patterns of BSR on two aspects of performance, namely technology and implementation performance. In understanding the phenomena, the study mainly draws on and integrates the literature of transaction cost economics theory, buyer-supplier relationships and advanced manufacturing technology as the basis of its theoretical framework and hypothesis development. Data were gathered through a questionnaire survey with 147 responses and seven semi-structured interviews of manufacturing firms in Malaysia. Quantitative data were analysed mainly using the AMOS (Analysis of Moment Structures) package for structural equation modelling and SPSS (Statistical Package for the Social Sciences) for analysis of variance (ANOVA). Data from the interview sessions were used to develop a case study with the intention of providing a richer and deeper understanding of the subject under investigation and to offer triangulation in the research process.
The results of the questionnaire survey indicate that the higher the level of technological specificity and uncertainty, the more firms are likely to engage in a closer relationship with technology suppliers. However, the complexity of the technology being implemented is associated with BSR only because it is associated with the level of uncertainty, which has a direct impact upon BSR. The analysis also provides strong support for the premise that developing strong BSR could lead to improved performance. However, at high levels of the transaction attributes, implementation performance suffers more when firms have weak relationships with technology suppliers than at moderate and low levels of the transaction attributes. The implications of the study are offered for both academic and practitioner audiences. The thesis closes with a report on its limitations and suggestions for further research that would address some of these limitations.

Relevance: 30.00%

Abstract:

Formative measurement has seen increasing acceptance in organizational research since the turn of the 21st century. However, in more recent times, a number of criticisms of the formative approach have appeared. Such work argues that formatively measured constructs are empirically ambiguous and thus flawed in a theory-testing context. The aim of the present paper is to examine the underpinnings of formative measurement theory in light of theories of causality and ontology in measurement in general. In doing so, a thesis is advanced which draws a distinction between reflective, formative, and causal theories of latent variables. This distinction is shown to be advantageous in that it clarifies the ontological status of each type of latent variable, and thus provides advice on appropriate conceptualization and application. The distinction also reconciles in part both recent supportive and critical perspectives on formative measurement. In light of this, advice is given on how most appropriately to model formative composites in theory-testing applications, placing the onus on the researcher to make clear their conceptualization and operationalization.

Relevance: 30.00%

Abstract:

This thesis presents an investigation of a two-dimensional water model and the development of a multiscale method for the modelling of large systems, such as a virus in water or a peptide immersed in solvent. We have implemented a two-dimensional 'Mercedes Benz' (MB), or BN2D, water model using Molecular Dynamics. We have studied the dependence of its dynamical and structural properties on the model's parameters. For the first time, we derived formulas to calculate thermodynamic properties of the MB model in the microcanonical (NVE) ensemble. We also derived equations of motion in the isothermal-isobaric (NPT) ensemble. We have analysed the rotational degree of freedom of the model in both ensembles. We have developed and implemented a self-consistent multiscale method, which is able to communicate between micro- and macroscales. This multiscale method assumes that matter consists of two phases, one related to the microscale and the other to the macroscale. We simulate the macroscale using Landau-Lifshitz fluctuating hydrodynamics, while we describe the microscale using Molecular Dynamics. We have demonstrated that communication between the disparate scales is possible without the introduction of a fictitious interface or of approximations which reduce the accuracy of the information exchange between the scales. We have investigated the control parameters which were introduced to control the contribution of each phase to the behaviour of the matter. We have shown that the microscales inherit dynamical properties of the macroscales and vice versa, depending on the concentration of each phase. We have shown that the radial distribution function is not altered and that velocity autocorrelation functions are gradually transformed from the Molecular Dynamics to the Fluctuating Hydrodynamics description when the phase balance is changed. In this work we test our multiscale method on the liquid argon, BN2D and SPC/E water models.
For the SPC/E water model we investigate microscale fluctuations, which are computed using an advanced technique for mapping the small scales to the large scales developed by Voulgarakis et al.
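
The velocity autocorrelation function used above to track the crossover between the Molecular Dynamics and Fluctuating Hydrodynamics descriptions can be estimated directly from a velocity trajectory. A sketch with an invented toy trajectory (not data from the thesis):

```python
import numpy as np

def vacf(velocities):
    """Normalised velocity autocorrelation C(t) = <v(0).v(t)> / <v.v>
    from a trajectory array shaped (n_steps, n_particles, dim),
    averaging over particles and time origins."""
    v = np.asarray(velocities, dtype=float)
    n = v.shape[0]
    corr = np.array([np.mean(np.sum(v[:n - lag] * v[lag:], axis=-1))
                     for lag in range(n)])
    return corr / corr[0]

# Toy trajectory: two particles with slowly decorrelating 2-D velocities
rng = np.random.default_rng(0)
steps = rng.normal(scale=0.1, size=(50, 2, 2))
traj = np.cumsum(steps, axis=0) + 1.0  # hypothetical correlated signal
c = vacf(traj)
```

Comparing such curves as the phase balance shifts is how the gradual transformation between the two descriptions would show up.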

Relevance: 30.00%

Abstract:

Objectives Particle delivery to the airways is an attractive prospect for many potential therapeutics, including vaccines. Developing strategies for inhalation of particles provides a targeted, controlled and non-invasive delivery route but, as with all novel therapeutics, in vitro and in vivo testing are needed prior to clinical use. Whilst advanced vaccine testing demands the use of animal models to address safety issues, the production of robust in vitro cellular models would take account of the ethical framework known as the 3Rs (Replacement, Reduction and Refinement of animal use), by permitting initial screening of potential candidates prior to animal use. There is thus a need for relevant, realistic in vitro models of the human airways. Key findings Our laboratory has designed and characterised a multi-cellular model of human airways that takes account of the conditions in the airways and recapitulates many salient features, including the epithelial barrier and mucus secretion. Summary Our human pulmonary models recreate many of the obstacles to successful pulmonary delivery of particles and therefore represent a valid test platform for screening compounds and delivery systems.