936 results for Thread safe parallel run-time
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
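The thesis itself targets a Transputer network; purely as an illustration of the data-parallel idea described above (more response channels simply require more processors), the hypothetical Java sketch below farms blocks of frequency response channels out to worker threads. The class, the `curveFitChannel` placeholder and all sizes are assumptions for the sketch, not the thesis's code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical illustration: distribute frequency response function (FRF)
// channels across workers, mirroring "more channels -> more processors".
public class ParallelCurveFit {

    // Placeholder for one channel's curve-fit result (poles, residues, ...).
    record ChannelFit(int channel, double[] fittedParams) {}

    // Stand-in for the per-channel part of an extraction algorithm.
    static ChannelFit curveFitChannel(int channel, double[] frf) {
        // ... a real implementation would fit a rational fraction polynomial here ...
        return new ChannelFit(channel, new double[0]);
    }

    public static void main(String[] args) throws Exception {
        int channels = 8;                       // hundreds in practice
        double[][] frfData = new double[channels][1024];

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<ChannelFit>> futures = new ArrayList<>();
        for (int c = 0; c < channels; c++) {
            final int ch = c;
            futures.add(pool.submit(() -> curveFitChannel(ch, frfData[ch])));
        }
        for (Future<ChannelFit> f : futures) {
            System.out.println("channel " + f.get().channel() + " fitted");
        }
        pool.shutdown();
    }
}
```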
Abstract:
Purpose: to evaluate changes in tear metrics and ocular signs induced by six months of silicone-hydrogel contact lens wear and the difference in baseline characteristics between those who successfully continued in contact lens wear compared to those who did not. Methods: Non-invasive Keratograph, Tearscope and fluorescein tear break-up times (TBUTs), tear meniscus height, bulbar and limbal hyperaemia, lid-parallel conjunctival folds (LIPCOF), phenol red thread, fluorescein and lissamine-green staining, and lid wiper epitheliopathy were measured on 60 new contact lens wearers fitted with monthly silicone-hydrogels (average age 36 ± 14 years, 40 females). Symptoms were evaluated by the Ocular Surface Disease Index (OSDI). After six months of full-time contact lens wear the above metrics were re-measured on those patients still in contact lens wear (n = 33). The initial measurements were also compared between the group still wearing lenses after six months and those who had ceased lens wear (n = 27). Results: There were significant changes in tear meniscus height (p = 0.031), bulbar hyperaemia (p = 0.011), fluorescein TBUT (p = 0.027), corneal (p = 0.007) and conjunctival (p = 0.009) staining, LIPCOF (p = 0.011) and lid wiper epitheliopathy (p = 0.002) after six months of silicone-hydrogel wear. Successful wearers had a higher non-invasive (17.0 ± 8.2 s vs 12.0 ± 5.6 s; p = 0.001) and fluorescein (10.7 ± 6.4 s vs 7.5 ± 4.7 s; p = 0.001) TBUT than drop-outs, although OSDI (cut-off 4.2) was also a strong predictor of success. Conclusion: Silicone-hydrogel lenses induced significant changes in the tear film and ocular surface as well as lid margin staining. Wettability of the ocular surface is the main factor affecting contact lens drop-out. © 2013 British Contact Lens Association.
Abstract:
The dynamics of the non-equilibrium Ising model with parallel updates is investigated using a generalized mean field approximation that incorporates multiple two-site correlations at any two time steps, which can be obtained recursively. The proposed method shows significant improvement in predicting local system properties compared to other mean field approximation techniques, particularly in systems with symmetric interactions. Results are also evaluated against those obtained from Monte Carlo simulations. The method is also employed to obtain parameter values for the kinetic inverse Ising modeling problem, where couplings and local field values of a fully connected spin system are inferred from data. © 2014 IOP Publishing Ltd and SISSA Medialab srl.
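For orientation only: the simplest ("naive") mean-field treatment of parallel-update kinetic Ising dynamics iterates the magnetisations as m_i(t+1) = tanh(beta (h_i + sum_j J_ij m_j(t))); the generalised approximation described above additionally carries two-site correlations across time steps, which this baseline omits. The sketch below, with made-up couplings and fields, implements only that baseline.

```java
// Naive mean-field iteration for a parallel-update (synchronous) kinetic
// Ising model: m_i(t+1) = tanh( beta * (h_i + sum_j J_ij * m_j(t)) ).
// This is only the baseline the paper improves on (it ignores the two-site
// correlations the generalised method keeps); all numbers are illustrative.
public class NaiveMeanFieldKineticIsing {
    public static void main(String[] args) {
        int n = 4;
        double beta = 1.0;
        double[] h = {0.1, -0.2, 0.0, 0.05};          // local fields
        double[][] J = new double[n][n];               // symmetric couplings
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) J[i][j] = 0.3 / n;

        double[] m = new double[n];                    // magnetisations m_i(t)
        for (int t = 0; t < 20; t++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double field = h[i];
                for (int j = 0; j < n; j++) field += J[i][j] * m[j];
                next[i] = Math.tanh(beta * field);     // synchronous update
            }
            m = next;
        }
        for (int i = 0; i < n; i++)
            System.out.printf("m_%d = %.4f%n", i, m[i]);
    }
}
```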
Abstract:
Presented is a study on a single-drive dual-parallel Mach-Zehnder modulator implementation as a single sideband suppressed carrier generator. High values of both extinction ratio and sidemode suppression ratio were obtained at different modulation frequencies over the C-band. In addition, a stabilisation loop was developed to preserve the single sideband generation over time. © The Institution of Engineering and Technology 2013.
Abstract:
If humans monitor streams of rapidly presented (approximately 100-ms intervals) visual stimuli, which are typically specific single letters of the alphabet, for two targets (T1 and T2), they often miss T2 if it follows T1 within an interval of 200-500 ms. If T2 follows T1 directly (within 100 ms; described as occurring at 'Lag 1'), however, performance is often excellent: the so-called 'Lag-1 sparing' phenomenon. Lag-1 sparing might result from the integration of the two targets into the same 'event representation', which fits with the observation that sparing is often accompanied by a loss of T1-T2 order information. Alternatively, this might point to competition between the two targets (implying a trade-off between performance on T1 and T2) and Lag-1 sparing might solely emerge from conditional data analysis (i.e. T2 performance given T1 correct). We investigated the neural correlates of Lag-1 sparing by carrying out magnetoencephalography (MEG) recordings during an attentional blink (AB) task, by presenting two targets with a temporal lag of either 1 or 2 and, in the case of Lag 2, with a nontarget or a blank intervening between T1 and T2. In contrast to Lag 2, where two distinct neural responses were observed, at Lag 1 the two targets produced one common neural response in the left temporo-parieto-frontal (TPF) area but not in the right TPF or prefrontal areas. We discuss the implications of this result with respect to competition and integration hypotheses, and with respect to the different functional roles of the cortical areas considered. We suggest that more than one target can be identified in parallel in left TPF, at least in the absence of intervening nontarget information (i.e. masks), yet identified targets are processed and consolidated as two separate events by other cortical areas (right TPF and PFC, respectively).
Abstract:
Purpose – The purpose of this paper is to investigate an underexplored aspect of outsourcing involving a mixed strategy in which parallel production is continued in-house at the same time as outsourcing occurs. Design/methodology/approach – The study applied a multiple case study approach and drew on qualitative data collected through in-depth interviews with wood product manufacturing companies. Findings – The paper posits that there should be a variety of mixed strategies between the two governance forms of “make” or “buy.” In order to address how companies should consider the extent to which they outsource, the analysis was structured around two ends of a continuum: in-house dominance or outsourcing dominance. With an in-house-dominant strategy, outsourcing complements an organization's own production to optimize capacity utilization and outsource less cost-efficient production, or is used as a tool to learn how to outsource. With an outsourcing-dominant strategy, in-house production helps maintain complementary competencies and avoids lock-in risk. Research limitations/implications – This paper takes initial steps toward an exploration of different mixed strategies. Additional research is required to understand the costs of different mixed strategies compared with insourcing and outsourcing, and to study parallel production from a supplier viewpoint. Practical implications – This paper suggests that managers should think twice before rushing to a “me too” outsourcing strategy in which in-house capacities are completely closed. It is important to take a dynamic view of outsourcing that maintains a mixed strategy as an option, particularly in situations that involve an underdeveloped supplier market and/or as a way to develop resources over the long term. Originality/value – The concept of combining both “make” and “buy” is not new. However, little if any research has focussed explicitly on exploring the variety of different types of mixed strategies that exist on the continuum between insourcing and outsourcing.
Abstract:
An application of the heterogeneous variables system prediction method to solving the time series analysis problem with respect to the sample size is considered in this work. A logical-and-probabilistic correlation is constructed from the class of logical decision functions. Two cases are considered: when the information about the event is retained in the process itself, and when it is retained in a dependent process.
Abstract:
In the field of Transition P systems implementation, it has been shown to be very important to know in advance how long the application of evolution rules in membranes takes. Moreover, having time estimates for rule application in membranes makes it possible to take important decisions related to hardware/software architecture design. The work presented here introduces an algorithm for applying active evolution rules in Transition P systems that is based on active rule elimination. The algorithm complies with the requisites of being non-deterministic and massively parallel and, more importantly, it is time-delimited because it depends only on the number of membrane evolution rules.
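The abstract does not reproduce the algorithm; the hypothetical sketch below only illustrates the general shape of rule application with active-rule elimination: non-deterministically pick an active rule, apply it a random number of times up to its current maximum applicability, and eliminate rules that can no longer fire. The classes, the tiny multiset and the rules are invented for the illustration and make no claim about the paper's exact procedure or its time bound.

```java
import java.util.*;

// Hypothetical sketch of rule application with active-rule elimination in a
// Transition P system membrane. Objects and rules are reduced to counts so
// the shape of the algorithm is visible; this is not the paper's actual code.
public class ActiveRuleElimination {

    record Rule(Map<Character, Integer> consumes) {
        // Maximum number of times this rule can still be applied.
        int maxApplications(Map<Character, Integer> multiset) {
            int max = Integer.MAX_VALUE;
            for (var e : consumes.entrySet()) {
                int available = multiset.getOrDefault(e.getKey(), 0);
                max = Math.min(max, available / e.getValue());
            }
            return max;
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        Map<Character, Integer> multiset = new HashMap<>(Map.of('a', 9, 'b', 4));
        List<Rule> active = new ArrayList<>(List.of(
                new Rule(Map.of('a', 2)),            // r1: consumes aa
                new Rule(Map.of('a', 1, 'b', 1))));  // r2: consumes ab

        while (!active.isEmpty()) {
            Rule r = active.get(rnd.nextInt(active.size()));   // non-determinism
            int max = r.maxApplications(multiset);
            if (max == 0) { active.remove(r); continue; }      // eliminate inactive rule
            int times = 1 + rnd.nextInt(max);                  // apply 1..max times
            for (var e : r.consumes().entrySet())
                multiset.merge(e.getKey(), -e.getValue() * times, Integer::sum);
        }
        System.out.println("Remaining objects: " + multiset);
    }
}
```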
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems that are traditionally non-polynomial. To date, the research carried out on implementing membrane systems has not yet reached the massively parallel character of this computational model. The best published approaches achieve a distributed architecture termed “partially parallel evolution with partially parallel communication”, in which several membranes are allocated to each processor, proxies are used to communicate with membranes allocated to other processors, and a policy of access control to the communications is mandatory. These approaches obtain processor-level parallelism in the application of evolution rules and in the internal communication among membranes allocated within each processor. External communications, however, share a common communication line, needed for communication among membranes placed on different processors, and remain sequential. In this work, we present a new hierarchical architecture that achieves external communication parallelism among processors and substantially increases parallelization in the application of evolution rules and in internal communications. Consequently, the time needed for each evolution step is reduced. With all of that, this new distributed hierarchical architecture comes close to the massively parallel character required by the model.
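Purely to picture the proxy scheme described above (membranes on the same processor communicate directly, cross-processor traffic passes through a per-processor proxy), here is a hypothetical sketch; every name in it is invented and it says nothing about the hierarchical topology this work actually proposes.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical illustration of the proxy idea: membranes hosted on the same
// processor exchange objects directly, while traffic to membranes hosted on
// another processor goes through that processor's proxy.
public class ProcessorProxy {
    private final Map<Integer, BlockingQueue<String>> localMembranes = new HashMap<>();
    private final Map<Integer, ProcessorProxy> remoteProxyByMembrane = new HashMap<>();

    void hostMembrane(int membraneId) {
        localMembranes.put(membraneId, new LinkedBlockingQueue<>());
    }

    void registerRemote(int membraneId, ProcessorProxy proxy) {
        remoteProxyByMembrane.put(membraneId, proxy);
    }

    // Internal communication stays inside the processor; external
    // communication is delegated to the destination processor's proxy.
    void send(int destMembrane, String object) {
        BlockingQueue<String> local = localMembranes.get(destMembrane);
        if (local != null) {
            local.add(object);                                              // internal
        } else {
            remoteProxyByMembrane.get(destMembrane).send(destMembrane, object); // external
        }
    }

    String receive(int membraneId) throws InterruptedException {
        return localMembranes.get(membraneId).take();
    }

    public static void main(String[] args) throws Exception {
        ProcessorProxy p1 = new ProcessorProxy(), p2 = new ProcessorProxy();
        p1.hostMembrane(1); p2.hostMembrane(2);
        p1.registerRemote(2, p2); p2.registerRemote(1, p1);
        p1.send(2, "a");                 // external: routed through p2
        System.out.println("membrane 2 got: " + p2.receive(2));
    }
}
```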
Abstract:
This paper is focused on a parallel Java implementation of a processor defined in a Network of Evolutionary Processors. The processor description is based on JDom, which provides a complete, Java-based solution for accessing, manipulating, and outputting XML data from Java code. Communication among the different processors, needed to obtain a fully functional simulation of a Network of Evolutionary Processors, will be treated in future work. A thread-safe model of the processor performs all parallel operations, such as rules and filters. The non-deterministic behaviour of the processor is achieved with one thread for each rule and for each filter (input and output). Different results of a processor evolution are shown.
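A hedged sketch of the structure the abstract describes, one thread per rule and per filter working over a shared, thread-safe word set, is given below; the concrete rule, filter and class names are invented, and real Network of Evolutionary Processors semantics (alternating evolution and communication steps) are not modelled.

```java
import java.util.*;
import java.util.concurrent.*;

// Hedged sketch of the thread-per-rule / thread-per-filter structure the
// abstract describes: every rule and every filter runs in its own thread
// over a shared, thread-safe word set. Rule and filter contents are
// invented placeholders.
public class EvolutionaryProcessorSketch {
    public static void main(String[] args) throws Exception {
        Set<String> words = ConcurrentHashMap.newKeySet();
        words.addAll(List.of("abc", "bca", "cab"));

        // One thread per rule: e.g. a substitution rule a -> x (placeholder).
        Runnable substitutionRule = () -> {
            for (String w : words)
                if (w.contains("a")) words.add(w.replaceFirst("a", "x"));
        };
        // One thread per filter: e.g. an output filter dropping words with 'c'.
        Runnable outputFilter = () -> words.removeIf(w -> w.contains("c"));

        ExecutorService pool = Executors.newCachedThreadPool();
        List<Future<?>> step = new ArrayList<>();
        step.add(pool.submit(substitutionRule));   // rules and filters run in
        step.add(pool.submit(outputFilter));       // parallel, non-deterministically
        for (Future<?> f : step) f.get();          // wait for the evolution step
        pool.shutdown();

        System.out.println("words after one step: " + words);
    }
}
```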
Abstract:
This work was partially supported by the Bulgarian National Science Fund under Contract No MM 1405. Part of the results were announced at the Fifth International Workshop on Optimal Codes and Related Topics (OCRT), White Lagoon, June 2007, Bulgaria.
Abstract:
For the first time, fully functional human mesenchymal stem cells (hMSCs) have been cultured at the litre-scale on microcarriers in a stirred-tank 5 l bioreactor (2.5 l working volume) and were harvested via a potentially scalable detachment protocol that allowed for the successful detachment of hMSCs from the cell-microcarrier suspension. Over 12 days, the dissolved O2 concentration was >45 % of saturation and the pH between 7.2 and 6.7 giving a maximum cell density in the 5 l bioreactor of 1.7 × 10⁵ cells/ml; this represents >sixfold expansion of the hMSCs, equivalent to that achievable from 65 fully-confluent T-175 flasks. During this time, the average specific O2 uptake of the cells in the 5 l bioreactor was 8.1 fmol/cell h and, in all cases, the 5 l bioreactors outperformed the equivalent 100 ml spinner-flasks run in parallel with respect to cell yields and growth rates. In addition, yield coefficients, specific growth rates and doubling times were calculated for all systems. Neither the upstream nor downstream bioprocessing unit operations had a discernible effect on cell quality with the harvested cells retaining their immunophenotypic markers, key morphological features and differentiation capacity. © 2013 Springer Science+Business Media Dordrecht.
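For reference, the standard batch-growth relations presumably behind the reported specific growth rates and doubling times are written out below; the inoculation density X_0 is not stated in the abstract and is left symbolic.

```latex
% Standard exponential-phase batch-growth relations, of the kind used to
% report specific growth rates and doubling times; X_0 (inoculation density)
% is not given in the abstract and is left symbolic.
\[
  \mu = \frac{\ln\!\left(X_t / X_0\right)}{t},
  \qquad
  t_d = \frac{\ln 2}{\mu},
  \qquad
  \text{fold expansion} = \frac{X_t}{X_0}
  \quad\bigl(\text{here } X_t = 1.7\times10^{5}\ \text{cells/ml},\ >6\text{-fold}\bigr).
\]
```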
Abstract:
This paper describes the basic tools of a real-time decision support system of the semiotic type, using as an example a prototype for the management and monitoring of a nuclear power unit implemented on the basis of the G2+GDA tool complex with cognitive graphics and parallel processing. This work was supported by RFBR (project 02-07-90042).
Abstract:
Dry eye disease is a common clinical condition whose aetiology and management challenge clinicians and researchers alike. Practitioners have a number of dry eye tests available to clinically assess dry eye disease, in order to treat their patients effectively and successfully. This thesis set out to determine the most relevant and successful key tests for dry eye disease diagnosis/management. There has been very little research on determining the most effective treatment options for these patients; therefore a randomised controlled study was conducted in order to see how different artificial tear treatments perform compared to each other, whether the preferred treatment could have been predicted from the ocular clinical assessment, and whether the preferred treatment subjectively related to the greatest improvement in ocular physiology and tear film stability. This research has found: 1. Of the plethora of ocular tear tests available in clinical practice, the tear stability tests, as measured by the non-invasive tear break-up time (NITBUT) and the invasive tear break-up time (NaFL TBUT), are strongly correlated. The tear volume tests, as measured by the phenol red thread (PRT) and tear meniscus height (TMH), are also related. Lid-parallel conjunctival folds (LIPCOF) and conjunctival staining are significantly correlated with one another. Symptomology and osmolarity were also found to be important tests for assessing dry eye. 2. Artificial tear supplements do improve ocular comfort, as well as the ocular surface, as observed by conjunctival staining and the reduction in LIPCOF. There is no strong evidence of one type of artificial tear supplement being more effective than others, and the data suggest that these improvements are due more to time than to the specific drops. 3. When trying to predict patient preference for artificial tears from baseline measurements, each category of artificial tear supplement appeared to improve at least one tear metric. The patients' preferred artificial tear supplement was clearly rated much higher than the other three drops used in the study, and the subjective responses were statistically stronger than the signs. 4. Patients are also willing to pay £17 for a community dry eye service in their area. In conclusion, the dry eye tests conducted in the study correlate with one another and with the symptoms reported by the patient. Artificial tears do make a difference objectively as well as subjectively. There is no optimum artificial treatment for dry eye; however, regular consistent use of artificial eye drops will improve the ocular surface.
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is to contribute an adaptive search engine that will repeatedly re-evaluate its load and, when appropriate, switch over to a different number of active processors. We focus on three aspects and break them out into three sub-problems as follows: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to have active in the search engine at any given time. NGP arises once a change in the number of processors has been determined: it must then be decided which groups of search data will be distributed across the processors. ROP is how to redistribute this data onto processors while keeping the engine responsive and while also minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, our incremental NGP algorithm speeds up the index computation 2-10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size of search engine. When combined, these algorithms give an adapting algorithm that is able to adjust the search engine size under a variable workload.
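As a rough illustration of the CNP idea only (periodically re-evaluate the observed load and choose a new processor count, after which the NGP and ROP steps regroup and redistribute the index), the hypothetical sketch below scales the processor count with query rate; the capacity constant, thresholds and names are invented and do not reflect the thesis's deterministic algorithm.

```java
// Hypothetical illustration of the CNP idea: periodically re-evaluate the
// query workload and decide how many processors (VMs) the search engine
// should use before the NGP/ROP steps regroup and redistribute the index.
// The capacity constant and scaling rule are invented.
public class AdaptiveSizing {

    static final double QUERIES_PER_SEC_PER_PROCESSOR = 50.0;   // assumed capacity
    static final int MIN_PROCESSORS = 1, MAX_PROCESSORS = 64;

    // CNP-style decision: target enough processors for the observed load.
    static int desiredProcessors(double observedQueriesPerSec) {
        int p = (int) Math.ceil(observedQueriesPerSec / QUERIES_PER_SEC_PER_PROCESSOR);
        return Math.max(MIN_PROCESSORS, Math.min(MAX_PROCESSORS, p));
    }

    public static void main(String[] args) {
        int current = 4;
        double[] sampledLoad = {120.0, 420.0, 80.0};              // example samples
        for (double load : sampledLoad) {
            int target = desiredProcessors(load);
            if (target != current) {
                // NGP: compute the new index grouping incrementally.
                // ROP: redistribute index data while staying responsive.
                System.out.printf("load %.0f q/s: switching %d -> %d processors%n",
                        load, current, target);
                current = target;
            } else {
                System.out.printf("load %.0f q/s: keeping %d processors%n", load, current);
            }
        }
    }
}
```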