890 results for real option analysis
Abstract:
Sonoluminescence (SL) involves the conversion of mechanical [ultra]sound energy into light. Whilst the phenomenon is invariably inefficient, typically converting just 10⁻⁴ of the incident acoustic energy into photons, it is nonetheless extraordinary, as the resultant energy density of the emergent photons exceeds that of the ultrasonic driving field by a factor of some 10¹². Sonoluminescence has specific [as yet untapped] advantages in that it can be effected at remote locations in an essentially wireless format. The only [usual] requirement is energy transduction via the violent oscillation of microscopic bubbles within the propagating medium. The dependence of sonoluminescent output on the generating sound field's parameters, such as pulse duration, duty cycle, and position within the field, has been observed and measured previously, and several relevant aspects are discussed presently. We also extrapolate the logic from a recently published analysis relating to the ensuing dynamics of bubble 'clouds' that have been stimulated by ultrasound. Here, the intention was to develop a relevant [yet computationally simplistic] model that captured the essential physical qualities expected from real sonoluminescent microbubble clouds. We focused on the inferred temporal characteristics of SL light output from a population of such bubbles, subjected to intermediate [0.5-2 MPa] ultrasonic pressures. Finally, whilst direct applications for sonoluminescent light output are thought unlikely in the main, we proceed to frame the state-of-the-art against several presently existing technologies that could form adjunct approaches with distinct potential for enhancing present sonoluminescent light output that may prove useful in real-world [biomedical] applications.
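The model itself is not reproduced in this abstract, so the following is only a minimal sketch: it assumes a Rayleigh-Plesset description of single-bubble dynamics and treats each violent collapse of a bubble in a small, non-interacting population as a proxy light flash. The drive frequency, pressure and bubble radii are assumed values within the quoted 0.5-2 MPa regime, not the paper's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants for water at room temperature; all values are assumptions.
RHO, MU, SIGMA = 998.0, 1.0e-3, 0.0725   # density [kg/m^3], viscosity [Pa s], surface tension [N/m]
P0, KAPPA = 101.325e3, 1.4               # ambient pressure [Pa], polytropic exponent
FREQ, PA = 1.0e6, 0.5e6                  # 1 MHz drive at 0.5 MPa (lower end of the quoted range)

def rayleigh_plesset(t, y, r0):
    """Classic Rayleigh-Plesset dynamics for a single bubble of rest radius r0."""
    r, v = y
    p_gas = (P0 + 2 * SIGMA / r0) * (r0 / r) ** (3 * KAPPA)
    p_drive = PA * np.sin(2 * np.pi * FREQ * t)
    accel = (p_gas - 2 * SIGMA / r - 4 * MU * v / r - P0 - p_drive) / (RHO * r) - 1.5 * v**2 / r
    return [v, accel]

def flash_times(r0, cycles=10, collapse_ratio=0.3):
    """Times at which the radius dips below collapse_ratio * r0, used here as a
    crude proxy for a sonoluminescent flash at bubble collapse."""
    sol = solve_ivp(rayleigh_plesset, (0.0, cycles / FREQ), [r0, 0.0], args=(r0,),
                    method="LSODA", max_step=1 / (200 * FREQ), rtol=1e-7, atol=1e-12)
    r = sol.y[0]
    collapsed = r < collapse_ratio * r0
    onsets = np.flatnonzero(collapsed[1:] & ~collapsed[:-1]) + 1
    return sol.t[onsets]

# A small 'cloud': rest radii spread over a few microns; pooling the flash times
# sketches the temporal structure of the cloud's light output.
rng = np.random.default_rng(0)
radii = rng.uniform(2e-6, 6e-6, size=10)
all_flashes = np.sort(np.concatenate([flash_times(r0) for r0 in radii]))
print(f"{all_flashes.size} collapse events across {radii.size} bubbles")
```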
Abstract:
Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species’ trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
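The stability comparison above rests on the eigenvalues of the community matrix: the system is locally stable when every eigenvalue has a negative real part, and the rightmost real part measures how stable it is. A minimal sketch, assuming a May-style random matrix with equal intraspecific terms on the diagonal; the "damped" variant is purely illustrative and is not the empirical structure analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_community_matrix(n=50, connectance=0.2, sigma=1.0, d=1.0):
    """May-style random community matrix: off-diagonal interactions are drawn
    at random intensity with probability `connectance`; the diagonal holds
    intraspecific (self-regulation) terms -d, assumed equal across species."""
    a = rng.normal(0.0, sigma, size=(n, n)) * (rng.random((n, n)) < connectance)
    np.fill_diagonal(a, -d)
    return a

def stability_margin(m):
    """Rightmost real part of the eigenvalues: negative => locally stable,
    and more negative => faster return to equilibrium."""
    return np.max(np.linalg.eigvals(m).real)

random_m = random_community_matrix()
# A crude stand-in for more structured interactions: same topology, but
# interaction strengths damped towards zero (purely illustrative).
damped_m = random_m.copy()
off_diag = ~np.eye(50, dtype=bool)
damped_m[off_diag] *= 0.2

print("random matrix stability margin:", round(stability_margin(random_m), 3))
print("damped matrix stability margin:", round(stability_margin(damped_m), 3))
```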
Abstract:
We present a mathematically rigorous metric that relates the achievable quality of service (QoS) of a real-time analytics service to the server energy cost of offering that service. Using a new iso-QoS evaluation methodology, we scale server resources to meet QoS targets and directly rank the servers in terms of their energy efficiency and, by extension, cost of ownership. Our metric and method are platform-independent and enable fair comparison of datacenter compute servers with significant architectural diversity, including micro-servers. We deploy our metric and methodology to compare three servers running financial option pricing workloads on real-life market data. We find that server ranking is sensitive to data inputs and to the desired QoS level, and that although scale-out micro-servers can be up to two times more energy-efficient than conventional heavyweight servers for the same target QoS, they are still six times less energy-efficient than high-performance computational accelerators.
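As a rough illustration of the iso-QoS idea, the sketch below fixes a throughput target implied by a QoS level, scales each platform to the smallest node count that meets it, and ranks platforms by energy per task at that operating point. The platform names, power and throughput figures are hypothetical placeholders, not the paper's measurements.

```python
import math
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    power_per_node_w: float       # average node power draw [W] (assumed)
    throughput_per_node: float    # option-pricing tasks/s per node (assumed)

def nodes_for_qos(server: Server, required_throughput: float) -> int:
    """Smallest number of nodes that meets the throughput implied by the QoS target."""
    return math.ceil(required_throughput / server.throughput_per_node)

def energy_per_task_at_iso_qos(server: Server, required_throughput: float) -> float:
    """Energy (J) per task once the platform is scaled to just meet the QoS target."""
    return nodes_for_qos(server, required_throughput) * server.power_per_node_w / required_throughput

# Hypothetical platforms; only the ranking procedure mirrors the iso-QoS idea above.
platforms = [
    Server("heavyweight-x86", power_per_node_w=350.0, throughput_per_node=4000.0),
    Server("micro-server",    power_per_node_w=30.0,  throughput_per_node=600.0),
    Server("accelerator",     power_per_node_w=250.0, throughput_per_node=20000.0),
]
target = 10000.0  # tasks/s implied by the chosen QoS level (assumed)
for s in sorted(platforms, key=lambda s: energy_per_task_at_iso_qos(s, target)):
    print(f"{s.name:16s} {energy_per_task_at_iso_qos(s, target):.4f} J/task "
          f"({nodes_for_qos(s, target)} nodes)")
```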
Abstract:
In order to use virtual reality as a sport analysis tool, we need to be sure that an immersed athlete reacts realistically in a virtual environment. This has been validated for a real handball goalkeeper facing a virtual thrower. However, we do not yet know which visual variables induce a realistic motor behavior in the immersed handball goalkeeper. In this study, we used virtual reality to dissociate the visual information related to the movements of the player from the visual information related to the trajectory of the ball. The aim is thus to evaluate the relative influence of these different sources of visual information on the goalkeeper's motor behavior. We tested 10 handball goalkeepers who had to predict the final position of the virtual ball in the goal when facing the following: only the throwing action of the attacking player (TA condition), only the resulting ball trajectory (BA condition), and both the throwing action of the attacking player and the resulting ball trajectory (TB condition). Here we show that performance was better in the BA and TB conditions but, contrary to expectations, substantially worse in the TA condition. A significant effect of ball landing zone does, however, suggest that the relative importance of visual information from the player and from the ball depends on the targeted zone in the goal. In some cases, body-based cues embedded in the throwing actions may have only a minor influence compared with the ball trajectory, and vice versa. Kinematic analysis was then combined with these results to determine why such differences occur depending on the ball landing zone and, consequently, how it can clarify the role of different sources of visual information on the motor behavior of an athlete immersed in a virtual environment.
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
Abstract:
A novel model-based principal component analysis (PCA) method is proposed in this paper for wide-area power system monitoring, aiming to tackle one of the critical drawbacks of conventional PCA, i.e. its inability to handle non-Gaussian distributed variables. It is a significant extension of the original PCA method, which has already been shown to outperform traditional methods such as rate-of-change-of-frequency (ROCOF). The ROCOF method is quick at processing local information, but its threshold is difficult to determine and nuisance tripping may easily occur. The proposed model-based PCA method uses a radial basis function neural network (RBFNN) model to handle the nonlinearity in the data set and thereby address the non-Gaussian issue, before the PCA method is used for islanding detection. To build an effective RBFNN model, this paper first uses a fast input selection method to remove insignificant neural inputs. Next, a heuristic optimization technique, namely Teaching-Learning-Based Optimization (TLBO), is adopted to tune the nonlinear parameters in the RBF neurons to build the optimized model. The novel RBFNN-based PCA monitoring scheme is then employed for wide-area monitoring using the residuals between the model outputs and the real PMU measurements. Experimental results confirm the efficiency and effectiveness of the proposed method in monitoring a suite of process variables with different distribution characteristics, showing that the proposed RBFNN PCA method is a reliable scheme and an effective extension to the linear PCA method.
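Although the full pipeline (fast input selection, TLBO tuning of the RBF neurons, real PMU data) is beyond an abstract, the core structure (fit a nonlinear RBF network, then run linear PCA monitoring on its residuals) can be sketched as follows on synthetic data. The randomly placed centres and the empirical control limit are assumptions standing in for the tuned model and the formal confidence limits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for PMU data: 3 measured channels driven nonlinearly by
# 2 underlying inputs (all values assumed, for illustration only).
n = 600
x = rng.normal(size=(n, 2))
y = np.column_stack([
    np.sin(x[:, 0]) + 0.3 * x[:, 1],
    x[:, 0] * x[:, 1],
    0.5 * x[:, 1] ** 2,
]) + 0.05 * rng.normal(size=(n, 3))

def rbf_design(inputs, centres, width=1.5):
    """Gaussian RBF design matrix: one column per neuron centre."""
    d2 = ((inputs[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Randomly placed centres stand in for the TLBO-tuned neurons of the paper.
centres = x[rng.choice(n, size=25, replace=False)]
phi = rbf_design(x, centres)
w, *_ = np.linalg.lstsq(phi, y, rcond=None)      # linear output weights
residuals = y - phi @ w                          # what the linear PCA then monitors

# Ordinary PCA on the residuals: retain 2 components and monitor the squared
# prediction error (Q statistic) of each sample against a simple limit.
e = residuals - residuals.mean(0)
_, _, vt = np.linalg.svd(e, full_matrices=False)
p = vt[:2].T
q = ((e - e @ p @ p.T) ** 2).sum(1)
limit = np.percentile(q, 99)                     # empirical 99% control limit
print(f"samples above the Q limit: {(q > limit).sum()} / {n}")
```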
Abstract:
In this paper, our previous work on Principal Component Analysis (PCA) based fault detection is extended to the dynamic monitoring and detection of loss-of-main in power systems using wide-area synchrophasor measurements. In the previous work, a static PCA model was built and verified to be capable of detecting and extracting faulty system events; however, the false alarm rate was high. To address this problem, this paper uses the well-known 'time lag shift' method to include the dynamic behavior of the PCA model based on the synchronized measurements from Phasor Measurement Units (PMU), an approach named Dynamic Principal Component Analysis (DPCA). Compared with the static PCA approach as well as the traditional passive mechanisms of loss-of-main detection, the proposed DPCA procedure describes how the synchrophasors are linearly auto- and cross-correlated, based on conducting a singular value decomposition on the augmented, time-lagged synchrophasor matrix. As in the static PCA method, two statistics, namely T² and Q, with confidence limits are calculated to form intuitive charts for engineers or operators to monitor the loss-of-main situation in real time. The effectiveness of the proposed methodology is evaluated on the loss-of-main monitoring of a real system, where the historic data are recorded from PMUs installed at several locations in the UK/Ireland power system.
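For readers unfamiliar with the 'time lag shift', the construction simply appends each sample's previous few samples to it before PCA, so a single linear decomposition captures auto- and cross-correlation in time; T² then tracks variation inside the retained subspace and Q the residual. A minimal sketch on synthetic autocorrelated data, with the lag order, retained components and empirical limits all assumed rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def time_lag_augment(x, lags=2):
    """Stack each row with its `lags` previous rows (the 'time lag shift')."""
    n = x.shape[0] - lags
    return np.hstack([x[lags - k : lags - k + n] for k in range(lags + 1)])

# Synthetic synchrophasor-like data: 4 correlated channels with AR(1) dynamics.
n, m = 1000, 4
drv = np.zeros((n, 2))
for t in range(1, n):
    drv[t] = 0.9 * drv[t - 1] + rng.normal(scale=0.1, size=2)
x = drv @ rng.normal(size=(2, m)) + 0.02 * rng.normal(size=(n, m))

xa = time_lag_augment(x, lags=2)                 # augmented, time-lagged matrix
xa = (xa - xa.mean(0)) / xa.std(0)
u, s, vt = np.linalg.svd(xa, full_matrices=False)

k = 4                                            # retained components (assumed)
p, lam = vt[:k].T, (s[:k] ** 2) / (len(xa) - 1)
scores = xa @ p
t2 = np.einsum("ij,j,ij->i", scores, 1.0 / lam, scores)   # Hotelling T^2
q = ((xa - scores @ p.T) ** 2).sum(1)                     # residual Q statistic

# Empirical 99% limits from 'normal' data; in practice these would come from
# the usual chi-square / F approximations with chosen confidence levels.
for name, stat in (("T2", t2), ("Q", q)):
    lim = np.percentile(stat, 99)
    print(f"{name}: {100 * (stat > lim).mean():.1f}% of samples above the 99% limit")
```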
Abstract:
This paper presents the numerical simulation of the ultimate behaviour of 85 one-way and two-way spanning, laterally restrained concrete slabs of variable thickness, span, reinforcement ratio, strength and boundary conditions reported in the literature by different authors. The developed numerical model is described and all of its assumptions are set out. ABAQUS, a finite element analysis (FEA) software suite, was employed, using its non-linear implicit static general analysis method; other analysis methods, such as Explicit Dynamic Analysis and the Riks method, are also discussed in general terms of their application. The aim is to demonstrate the ability and efficacy of FEA to simulate the ultimate load behaviour of slabs considering different material properties and boundary conditions. The authors intended to present a numerical model that provides consistent predictions of the ultimate behaviour of laterally restrained slabs and that could be used as an alternative to expensive real-life testing as well as for the design and assessment of new and existing structures respectively. The enhanced strength of laterally restrained slabs compared with the predictions of conventional design methods is believed to be due to compressive membrane action (CMA), an inherent phenomenon of laterally restrained concrete beams and slabs. The numerical predictions obtained from the developed model were in good agreement with the experimental results and with those obtained from the CMA method developed at Queen’s University Belfast, UK.
Abstract:
The battle to mitigate Android malware has become more critical with the emergence of new strains incorporating increasingly sophisticated evasion techniques, in turn necessitating more advanced detection capabilities. Hence, in this paper we propose and evaluate a machine learning approach based on eigenspace analysis for Android malware detection, using features derived from static analysis characterization of Android applications. Empirical evaluation with a dataset of real malware and benign samples shows that a detection rate of over 96% with a very low false positive rate is achievable using the proposed method.
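The eigenspace idea can be pictured as learning the principal subspace of benign applications' static-analysis feature vectors and flagging applications that sit far from it. The sketch below is a generic reconstruction-error variant on synthetic binary features; the actual features, classifier and thresholds used in the paper are not given in the abstract, so everything below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic binary feature vectors (e.g. permission / API-call indicators).
# Benign apps share a low-dimensional pattern; "malware" deviates from it.
n_features, n_benign, n_malware = 60, 400, 60
basis = rng.random((5, n_features)) < 0.3
noise = rng.random((n_benign, n_features)) < 0.02
benign = (basis[rng.integers(0, 5, n_benign)] ^ noise).astype(float)
malware = (rng.random((n_malware, n_features)) < 0.25).astype(float)

# Eigenspace of the benign set: top eigenvectors of the covariance (via SVD).
mu = benign.mean(0)
_, _, vt = np.linalg.svd(benign - mu, full_matrices=False)
p = vt[:5].T                          # retained eigenvectors (assumed: 5)

def reconstruction_error(samples):
    """Distance from the benign eigenspace; large values suggest malware."""
    c = samples - mu
    return ((c - c @ p @ p.T) ** 2).sum(1)

threshold = np.percentile(reconstruction_error(benign), 95)   # assumed cut-off
detected = reconstruction_error(malware) > threshold
false_pos = reconstruction_error(benign) > threshold
print(f"detection rate: {detected.mean():.1%}, false positives: {false_pos.mean():.1%}")
```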
Abstract:
We pursue a comparative analysis of employers’ age management practices in Britain and Germany, asking how valid ‘convergence’ and ‘Varieties of Capitalism’ theories are. After rejecting the convergence verdict, we proceed to ask how far ‘path dependence’ helps explain inter-country differences. Through 19 interviews with British and German experts, we find that firms have reacted in different ways to promptings from the EU and the two states. Change has been modest and a rhetoric-reality gap exists in firms as they seek to hedge. We point to continuities in German institutional methods of developing new initiatives, and to the emerging role of British NGOs in helping firms and the state develop new options. We argue that ‘path dependence’ offers insight into the national comparison, but we also advance the idea of national modes of firm option exploration as an important way of conceptualizing the processes involved.
Abstract:
Recent analyses of sediment samples from "black mat" sites in South America and Europe support previous interpretations of an ET impact event that reversed the Late Glacial demise of LGM ice during the Bølling-Allerød warming, resulting in a resurgence of ice termed the Younger Dryas (YD) cooling episode. The breakup or impact of a cosmic body at the YD boundary coincides with the onset of a 1-kyr-long interval of glacial resurgence, one of the most studied events of the Late Pleistocene. New analytical databases reveal a corpus of data indicating that the cosmic impact was a real event, quite possibly a cosmic airburst from Earth's encounter with the Taurid Complex comet or an unknown asteroid, an event that led to cosmic fragments exploding interhemispherically over widely dispersed areas, including the northern Andes of Venezuela and the Alps on the Italian/French frontier. While the databases in the two areas differ somewhat, the overall interpretation is that microtextural evidence in weathering rinds and in sands of associated paleosols and glaciofluvial deposits carries undeniable attributes of melted glassy carbon and Fe spherules, planar deformation features, shock-melted and contorted quartz, occasional transition and platinum metals, and brecciated and impacted minerals of diverse lithologies. In concert with other black mat localities in the Western USA, the Netherlands, coastal France, Syria, Central Asia, Peru, Argentina and Mexico, it appears that a widespread cosmic impact by an asteroid or comet is responsible for deposition of the black mat at the onset of the YD glacial event. Whether or not the impact caused a 1-kyr interval of glacial climate depends upon whether the Earth had multiple centuries-long episodic encounters with the Taurid Complex or asteroid remnants; whether impact-related changes in microclimates sustained climatic forcing sufficient to maintain positive mass balances in the reformed ice; and/or whether inertia in the Atlantic thermohaline circulation system persisted for 1 kyr.
Abstract:
Quantile normalization (QN) is a technique for microarray data processing and is the default normalization method in the Robust Multi-array Average (RMA) procedure, which was primarily designed for analysing gene expression data from Affymetrix arrays. Given the abundance of Affymetrix microarrays and the popularity of the RMA method, it is crucially important that the normalization procedure is applied appropriately. In this study we carried out simulation experiments and also analysed real microarray data to investigate the suitability of RMA when it is applied to datasets with different groups of biological samples. From our experiments, we showed that RMA with QN does not preserve the biological signal included in each group, but rather mixes the signals between the groups. We also showed that the Median Polish method in the summarization step of RMA has a similar mixing effect. RMA is one of the most widely used methods in microarray data processing and has been applied to a vast volume of data in biomedical research. The problematic behaviour of this method suggests that previous studies employing RMA could have been misled or adversely affected. We therefore think it is crucially important that the research community recognizes the issue and starts to address it. The two core elements of the RMA method, quantile normalization and Median Polish, both have the undesirable effect of mixing biological signals between different sample groups, which can be detrimental to drawing valid biological conclusions and to any subsequent analyses. Based on the evidence presented here and in the literature, we recommend exercising caution when using RMA to process microarray gene expression data, particularly in situations where there are likely to be unknown subgroups of samples.
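Quantile normalization itself is short enough to show: sort each array's values, average across arrays rank by rank, and write those averages back in each array's original order, so every array ends up with an identical distribution of values. The toy example below (synthetic values, not microarray data) illustrates how a genuine group-level difference is removed in the process, which is the mixing effect described above.

```python
import numpy as np

def quantile_normalize(x):
    """Columns = arrays, rows = probes. Each column is replaced by the
    rank-wise mean of the sorted columns, so all columns end up with an
    identical distribution of values."""
    ranks = x.argsort(axis=0).argsort(axis=0)        # rank of each value within its column
    mean_sorted = np.sort(x, axis=0).mean(axis=1)    # average value at each rank
    return mean_sorted[ranks]

rng = np.random.default_rng(5)
group_a = rng.normal(8.0, 1.0, size=(1000, 3))       # three arrays in group A
group_b = rng.normal(9.0, 1.0, size=(1000, 3))       # group B: globally higher signal
data = np.hstack([group_a, group_b])

normalized = quantile_normalize(data)
print("group means before:", data[:, :3].mean().round(2), data[:, 3:].mean().round(2))
print("group means after :", normalized[:, :3].mean().round(2), normalized[:, 3:].mean().round(2))
```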
Abstract:
Aims: This study aimed to gain insight into patients’ perceptions of natural tooth loss and to explore their experiences of oral rehabilitation according to a functionally orientated approach (SDA) and with Removable Partial Dentures (RPDs).
Study Design: For this qualitative study, a purposive sample of 15 partially dentate older patients was recruited from Cork Dental School and Hospital. These patients had previously participated in a randomised controlled clinical trial (RCT) in which they were provided either with SDA treatment using adhesive bridgework or with cobalt-chromium framework RPDs. In-depth interviews were undertaken and thematic analysis was used to interpret the data.
Results: The findings of this study indicated strong satisfaction with SDA treatment. Patients referred to the ease with which they adapted to the adhesive prostheses, describing them as “lightweight”, “neat” and “fixed”. Irrespective of treatment option, patients indicated that they felt their new prostheses were durable and an improvement on previous treatments. Most patients indicated that, prior to the RCT, they had not attended a general dentist for a number of years, and then only for acute issues. They had concerns that the treatment provided to them as part of the RCT would not be available to them in primary care. Interestingly, although they did not want their condition to deteriorate, patients stated that if their prostheses failed they would not seek alternative treatment but would revert to previous coping mechanisms.
Conclusion: This study illustrates that partially dentate older patients were very satisfied with oral rehabilitation according to a functionally orientated approach. Unfortunately, they did not believe that this treatment would currently be made available to them in a primary care setting.
Abstract:
Social media channels, such as Facebook or Twitter, allow people to express their views and opinions about any public topic. Public sentiment related to future events, such as demonstrations or parades, indicates public attitude and may therefore be used to estimate the level of disruption and disorder during such events. Consequently, sentiment analysis of social media content may be of interest to different organisations, especially in the security and law enforcement sectors. This paper presents a new lexicon-based sentiment analysis algorithm that has been designed with a focus on real-time Twitter content analysis. The algorithm consists of two key components, namely sentiment normalisation and an evidence-based combination function, which are used to estimate the intensity of the sentiment rather than a positive/negative label and to support the classification of mixed sentiment. Finally, we illustrate a case study examining the relation between the negative sentiment of Twitter posts related to the English Defence League and the level of disorder during the organisation’s related events.
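The paper's lexicon, normalisation scheme and combination function are not reproduced in this abstract, so the sketch below uses a toy lexicon, simple length normalisation and a signed-intensity combination purely as placeholders for those two components.

```python
import re

# Toy lexicon; a real deployment would use a much larger sentiment lexicon.
LEXICON = {"good": 2, "great": 3, "calm": 1, "peaceful": 2,
           "bad": -2, "riot": -3, "violence": -3, "angry": -2}

def sentiment(tweet: str) -> dict:
    """Return normalised positive/negative evidence and a combined intensity
    in [-1, 1]; a placeholder for the paper's normalisation + combination."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    pos = sum(max(LEXICON.get(t, 0), 0) for t in tokens)
    neg = sum(-min(LEXICON.get(t, 0), 0) for t in tokens)
    # Normalise by tweet length so short and long posts are comparable.
    pos_n = pos / max(len(tokens), 1)
    neg_n = neg / max(len(tokens), 1)
    # Keep both sides of the evidence plus a signed intensity, so mixed-sentiment
    # posts remain visible rather than being forced into one label.
    intensity = (pos - neg) / (pos + neg) if (pos + neg) else 0.0
    return {"pos": round(pos_n, 3), "neg": round(neg_n, 3), "intensity": round(intensity, 3)}

for tweet in ["Great turnout, calm and peaceful march",
              "Angry crowd, riot and violence reported",
              "Peaceful start but violence later"]:
    print(tweet, "->", sentiment(tweet))
```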
Abstract:
Persistent organic pollutants (POPs) are toxic substances, highly resistant to environmental degradation, which can bio-accumulate and have long-range atmospheric transport potential. Most studies focus on the effects of single compounds; however, as humans are exposed to several POPs simultaneously, investigating the effects of exposure to real-life POP mixtures on human health is necessary. A defined mixture of POPs was used, in which the concentration of each compound reflected its contribution to the levels seen in Scandinavian human serum (total mix). Several sub-mixtures representing different classes of POP were also constructed. The perfluorinated (PFC) mixture contained six perfluorinated compounds; the brominated (Br) mixture contained seven brominated compounds; and the chlorinated (Cl) mixture contained polychlorinated biphenyls as well as p,p'-dichlorodiphenyldichloroethylene, hexachlorobenzene, three chlordanes, three hexachlorocyclohexanes and dieldrin. Human hepatocarcinoma (HepG2) cells were exposed for 2 h and 48 h to the seven mixtures and analysed on a CellInsight™ NXT High Content Screening platform. Multiple cytotoxic endpoints were investigated: cell number, nuclear intensity and area, mitochondrial mass, mitochondrial membrane potential (MMP) and reactive oxygen species (ROS). Both the Br and Cl mixtures induced ROS production but did not lead to apoptosis. The PFC mixture induced ROS production and likely induced cell apoptosis accompanied by the dissipation of MMP. Synergistic effects were evident for ROS induction when cells were exposed to the PFC+Br mixture. No significant effects were detected for the Br+Cl, PFC+Cl or total mixtures, which contain the same concentrations of chlorinated compounds as the Cl mixture plus additional compounds, highlighting the need for further exploration of POP mixtures in risk assessment.