962 results for Parallel track model
Abstract:
Objectives: The goal of the present study was to elucidate the contribution of the newly recognized virulence factor choline to the pathogenesis of Streptococcus pneumoniae in an animal model of meningitis. Results: The choline-containing strain D39Cho(-) and its isogenic choline-free derivative D39Cho(-)licA64, each expressing capsular polysaccharide 2, were introduced intracisternally at an inoculum size of 10^3 CFU into 11-day-old Wistar rats. During the first 8 h post infection, both strains multiplied and stimulated a similar immune response that involved expression of high levels of proinflammatory cytokines, matrix metalloproteinase 9 (MMP-9), and IL-10, and the influx of white blood cells into the CSF. A virtually identical immune response was also elicited by intracisternal inoculation of 10^7 CFU equivalents of either choline-containing or choline-free cell walls. At sampling times past 8 h, strain D39Cho(-) continued to replicate, accompanied by an intense inflammatory response and strong granulocytic pleocytosis. Animals infected with D39Cho(-) died within 20 h, and histopathology revealed brain damage in the cerebral cortex and hippocampus. In contrast, the initial immune response generated by the choline-free strain D39Cho(-)licA64 began to decline after the first 8 h, accompanied by elimination of the bacteria from the CSF in parallel with a strong WBC response peaking at 8 h after infection. All animals survived, and there was no evidence of brain damage. Conclusion: Choline in the cell wall is essential for pneumococci to remain highly virulent, survive within the host, and establish pneumococcal meningitis.
Abstract:
Linear programs, or LPs, are often used in optimization problems, such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the simplex method, which will yield a solution, if one exists, over the real numbers. From a purely numerical standpoint this will be an optimal solution, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is to present a parallel algorithm for solving ILPs. We discuss a serial algorithm using a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel algorithm, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
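The serial breadth-first branch-and-bound search with pruning that the abstract describes can be sketched in miniature. The sketch below is an illustrative assumption, not the report's implementation: it solves a small 0/1 knapsack ILP, uses the fractional (LP) relaxation as the pruning bound, and omits the client-server parallelization.

```python
from collections import deque

def lp_bound(values, weights, capacity, fixed):
    # Upper bound from the LP relaxation: decided variables stay fixed,
    # remaining capacity is filled greedily with fractional items
    # (assumes positive weights).
    total = sum(values[i] for i, x in enumerate(fixed) if x == 1)
    cap = capacity - sum(weights[i] for i, x in enumerate(fixed) if x == 1)
    if cap < 0:
        return float("-inf")              # infeasible partial fixing
    free = [i for i, x in enumerate(fixed) if x is None]
    for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, cap / weights[i])
        total += take * values[i]
        cap -= take * weights[i]
    return total

def solve_ilp(values, weights, capacity):
    n = len(values)
    best = 0
    queue = deque([[None] * n])           # FIFO queue -> breadth-first search
    while queue:
        node = queue.popleft()
        if lp_bound(values, weights, capacity, node) <= best:
            continue                      # prune: bound cannot beat incumbent
        if None not in node:
            continue                      # fully fixed leaf
        j = node.index(None)              # branch on the next undecided variable
        for x in (1, 0):
            child = node[:]
            child[j] = x
            if sum(weights[i] for i in range(n) if child[i] == 1) > capacity:
                continue                  # infeasible branch
            val = sum(values[i] for i in range(n) if child[i] == 1)
            best = max(best, val)         # undecided variables default to 0
            queue.append(child)
    return best

print(solve_ilp([10, 13, 7, 8], [3, 4, 2, 3], 7))  # -> 23
```

In a client-server extension, the server would hand queue nodes to worker clients and collect their incumbents, which is why the search order is no longer strictly breadth-first: fast nodes return before slow ones.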
Abstract:
The three-step test is central to the regulation of copyright limitations at the international level. Delineating the room for exemptions with abstract criteria, the three-step test is by far the most important and comprehensive basis for the introduction of national use privileges. It is an essential, flexible element in the international limitation infrastructure that allows national lawmakers to satisfy domestic social, cultural, and economic needs. Given the universal field of application that follows from the test's open-ended wording, the provision creates much more breathing space than the more specific exceptions recognized in international copyright law. EC copyright legislation, however, fails to take advantage of the flexibility inherent in the three-step test. Instead of using the international provision as a means to open up the closed EC catalogue of permissible exceptions, offer sufficient breathing space for social, cultural, and economic needs, and enable EC copyright law to keep pace with the rapid development of the Internet, the Copyright Directive 2001/29/EC encourages the application of the three-step test to further restrict statutory exceptions that are often defined narrowly in national legislation anyway. In the current online environment, however, enhanced flexibility in the field of copyright limitations is indispensable. From a social and cultural perspective, the web 2.0 promotes and enhances freedom of expression and information with its advanced search engine services, interactive platforms, and various forms of user-generated content. From an economic perspective, it creates a parallel universe of traditional content providers relying on copyright protection, and emerging Internet industries whose further development depends on robust copyright limitations.
In particular, the newcomers in the online market – social networking sites, video forums, and virtual worlds – promise a remarkable potential for economic growth that has already attracted the attention of the OECD. Against this background, the time is ripe to debate the introduction of an EC fair use doctrine on the basis of the three-step test. Otherwise, EC copyright law is likely to frustrate important opportunities for cultural, social, and economic development. To lay the groundwork for the debate, the differences between the continental European and the Anglo-American approach to copyright limitations (section 1), and the specific merits of these two distinct approaches (section 2), will be discussed first. An analysis of current problems that have arisen under the present dysfunctional EC system (section 3) will then serve as a starting point for proposing an EC fair use doctrine based on the three-step test (section 4). Finally, the international dimension of this fair use proposal will be considered (section 5).
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with closed and open eyes. Intracerebral dipole model source locations in the frequency domain were computed. Eye opening (visual input) caused centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was clear anteriorization of α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (α-2 also to the left). Complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. Contrary to PET and fMRI findings, these results suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
Abstract:
The Earth's bow shock is very efficient at accelerating ions out of the incident solar wind distribution to high energies (≈200 keV/e). Fluxes of energetic ions accelerated at the quasi-parallel bow shock, also known as diffuse ions, are best represented by exponential spectra in energy/charge, which require additional assumptions to be incorporated into the model spectra. One of these assumptions is a so-called "free escape boundary" along the interplanetary magnetic field in the upstream direction. Locations along the IBEX orbit are ideally suited for in situ measurements to investigate the existence of an upstream free escape boundary for bow-shock-accelerated ions. In this study we use two years of ion measurements from the background monitor on the IBEX spacecraft, supported by ACE solar wind observations. The IBEX Background Monitor is sensitive to protons >14 keV, which includes the energy of the maximum flux for diffuse ions. With increasing distance from the bow shock along the interplanetary magnetic field, the count rates for diffuse ions streaming away from the bow shock stay constant, while count rates for diffuse ions streaming toward the shock gradually decrease from a maximum value to ~1/e of it at distances of about 10 RE to 14 RE. These observations of a gradual decrease support the transition to a free escape continuum for ions of energy >14 keV at distances of 10 RE to 14 RE from the bow shock.
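The ~1/e falloff quoted above corresponds to an e-folding distance of the count rate along the field line. A minimal sketch, using synthetic numbers rather than IBEX data, shows how such an e-folding distance can be recovered from count rates by a log-linear least-squares fit; the amplitude, distances, and e-folding length below are illustrative assumptions.

```python
import math

# Synthetic count rates for sunward-streaming diffuse ions, assumed to fall
# off as r(d) = r0 * exp(-d / L) with distance d (in R_E) from the bow shock.
r0, L_true = 100.0, 12.0
dists = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
rates = [r0 * math.exp(-d / L_true) for d in dists]

# Least-squares fit of ln(rate) = ln(r0) - d / L; the slope gives -1/L.
n = len(dists)
xbar = sum(dists) / n
ybar = sum(math.log(r) for r in rates) / n
slope = (sum((d - xbar) * (math.log(r) - ybar) for d, r in zip(dists, rates))
         / sum((d - xbar) ** 2 for d in dists))
L_fit = -1.0 / slope
print(round(L_fit, 1))  # recovers the e-folding distance, 12.0
```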
Abstract:
A detailed microdosimetric characterization of the M. D. Anderson 42 MeV (p,Be) fast neutron beam was performed using microdosimetric techniques and a 1/2-inch-diameter Rossi proportional counter. These measurements were performed at 5, 15, and 30 cm depths on the central axis, 3 cm inside, and 3 cm outside the field edge for 10 × 10 and 20 × 20 cm field sizes. Spectra were also measured at 5 and 15 cm depth on the central axis for a 6 × 6 cm field size. Continuous-slowing-down-approximation calculations were performed to model the nuclear processes that occur in the fast neutron beam. Irradiation of the CR-39 was performed using a tandem electrostatic accelerator, with protons of 10, 6, and 3 MeV and alpha particles of 15, 10, and 7 MeV incident energy on target, at angles of incidence from 0 to 85 degrees. The critical angle, as well as the track etch rate and normal-incidence diameter versus linear energy transfer (LET), were obtained from these measurements. The bulk etch rate was also calculated from these measurements. The dose response of the material was studied, and the angular distribution of charged particles created by the fast neutron beam was measured with CR-39. The efficiency of CR-39 was calculated relative to that of the Rossi chamber, and an algorithm was devised for derivation of LET spectra from the major and minor axis dimensions of the observed tracks. The CR-39 was irradiated in the same positions as the Rossi chamber, and the derived spectra were compared directly.
Abstract:
A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at √s = 7 TeV using 4.7 fb⁻¹ of data collected with the ATLAS experiment at the LHC. In these models, the lightest chargino is predicted to have a lifetime long enough to be detected in the tracking detectors of collider experiments. This analysis explores such models by searching for chargino decays that result in tracks with few associated hits in the outer region of the tracking system. The transverse-momentum spectrum of candidate tracks is found to be consistent with the expectation from Standard Model background processes, and constraints on chargino properties are obtained.
Abstract:
A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb⁻¹ of √s = 7 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.
Abstract:
Many extensions of the Standard Model posit the existence of heavy particles with long lifetimes. In this Letter, results are presented of a search for events containing one or more such particles, which decay at a significant distance from their production point, using a final state containing charged hadrons and an associated muon. This analysis uses a data sample of proton-proton collisions at √s = 7 TeV corresponding to an integrated luminosity of 4.4 fb⁻¹ collected in 2011 by the ATLAS detector operating at the Large Hadron Collider. Results are interpreted in the context of R-parity violating supersymmetric scenarios. No events in the signal region are observed and limits are set on the production cross section for pair production of supersymmetric particles, multiplied by the square of the branching fraction for a neutralino to decay to charged hadrons and a muon, based on the scenario where both of the produced supersymmetric particles give rise to neutralinos that decay in this way. However, since the search strategy is based on triggering on and reconstructing the decay products of individual long-lived particles, irrespective of the rest of the event, these limits can easily be reinterpreted in scenarios with different numbers of long-lived particles per event. The limits are presented as a function of neutralino lifetime, and for a range of squark and neutralino masses.
Abstract:
BACKGROUND: We developed a canine model of acute atopic dermatitis to evaluate the potential of compounds to treat pruritus and skin lesions induced in Dermatophagoides farinae (Df)-sensitized dogs. HYPOTHESIS/OBJECTIVES: The aim was to investigate the effectiveness of long-term recording activity monitors for assessing pruritus induced by allergen challenges. ANIMALS: Thirty-two Df-sensitized laboratory dogs. METHODS: In two blinded crossover studies, 28 Df-sensitized dogs were challenged on 3 days with a Df slurry applied to clipped abdominal skin. Dogs were treated with a positive control (prednisolone 1 mg/kg once daily for 5 days, starting 1 day before challenge) or left untreated; all were fitted with activity monitors. To confirm pruritus, a parallel study with four dogs was conducted, filming the dogs before and during challenge and assessing the film for pruritic behaviour. RESULTS: The activity of dogs treated with prednisolone was significantly lower between 00.00 and 03.00 h and between 03.00 and 06.00 h compared with untreated dogs (repeated-measures ANCOVA; P < 0.0001). To determine whether the recorded night-time activity corresponded to pruritic manifestations, we compared activity monitor and video recordings of four dogs for two periods (16.30-20.30 and 24.00-03.00 h) before and during a Df challenge. The correlation between night-time activity recorded by the monitors and observed pruritic behaviour was highly significant (test of correlation coefficient versus zero: r = 0.57, P < 0.0001). CONCLUSIONS AND CLINICAL IMPORTANCE: Determination of night-time activity with activity monitors after allergen challenge appears to be an objective and practical way to assess pruritus in this experimental model of canine atopic dermatitis.
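The reported r = 0.57 is a correlation coefficient between monitor-recorded activity and observed pruritic behaviour. A minimal sketch of such a computation is shown below with entirely hypothetical per-interval counts (the function and data are illustrations, not the study's analysis, which also tested the coefficient against zero):

```python
import math

def pearson_r(x, y):
    # Sample Pearson correlation coefficient between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

activity = [12, 30, 45, 22, 5, 50]   # hypothetical monitor counts per interval
pruritus = [1, 3, 4, 2, 0, 5]        # hypothetical scored pruritic events
print(round(pearson_r(activity, pruritus), 2))  # strong positive correlation
```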
Abstract:
The potential and adaptive flexibility of population dynamics P systems (PDP) for studying population dynamics suggests that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a model a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of this species was simulated under different scenarios, ranging from the absence of water-flow change, to a weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented here, where reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
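The scenario comparison described above can be caricatured with a far simpler, non-membrane model. The sketch below is a hypothetical discrete-time simulation, not the study's PDP model: larvae reproduce during an assumed spawning season and a flow-dependent fraction is washed out each week, allowing the flow-management scenarios to be compared.

```python
# Hypothetical toy model: weekly larval abundance under flow scenarios.
# All rates and the spawning window are illustrative assumptions.

def simulate(weeks, birth_rate, washout_per_flow, flow_schedule):
    larvae = 1000.0
    for week in range(weeks):
        spawning = 10 <= week % 52 <= 35     # crude temperature-driven season
        if spawning:
            larvae *= (1 + birth_rate)       # reproduction
        flow = flow_schedule[week % len(flow_schedule)]
        larvae *= (1 - washout_per_flow * flow)  # flow-driven washout
    return larvae

no_change    = simulate(52, 0.10, 0.15, [0.0])   # no water-flow change
intermediate = simulate(52, 0.10, 0.15, [0.5])   # intermediate flow rate
weekly_high  = simulate(52, 0.10, 0.15, [1.0])   # high weekly flow
print(weekly_high < intermediate < no_change)    # more flow, fewer larvae
```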
Abstract:
Passive positioning systems produce user location information for third-party providers of positioning services. Since the tracked wireless devices do not participate in the positioning process, passive positioning can only rely on simple, measurable radio signal parameters, such as timing or power information. In this work, we provide a passive tracking system for WiFi signals with an enhanced particle filter using fine-grained power-based ranging. Our proposed particle filter provides an improved likelihood function on observation parameters and is equipped with a modified coordinated turn model to address the challenges in a passive positioning system. The anchor nodes for WiFi signal sniffing and target positioning use software-defined radio techniques to extract channel state information to mitigate multipath effects. By combining the enhanced particle filter and a set of enhanced ranging methods, our system can track mobile targets with an accuracy of 1.5 m for 50% and 2.3 m for 90% of cases in a complex indoor environment. Our proposed particle filter significantly outperforms the typical bootstrap particle filter, the extended Kalman filter and trilateration algorithms.
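For contrast with the enhanced filter, the "typical bootstrap particle filter" baseline mentioned above can be sketched for range-based tracking. Everything below is an illustrative assumption (anchor positions, noise level, random-walk motion model); it is not the paper's filter, which uses a modified coordinated turn model and an improved likelihood.

```python
import math, random

random.seed(1)

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical sniffer positions
SIGMA = 0.5                                        # assumed ranging noise (m)

def likelihood(p, ranges):
    # Gaussian likelihood of the measured ranges given particle position p.
    w = 1.0
    for (ax, ay), r in zip(ANCHORS, ranges):
        pred = math.hypot(p[0] - ax, p[1] - ay)
        w *= math.exp(-((r - pred) ** 2) / (2 * SIGMA ** 2))
    return w

def _pick(probs):
    # Draw one index from a discrete distribution.
    u, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1

def step(particles, ranges):
    # Bootstrap filter step: propagate with a random-walk motion model,
    # weight by the range likelihood, then resample.
    moved = [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
             for x, y in particles]
    weights = [likelihood(p, ranges) for p in moved]
    total = sum(weights) or len(moved)
    probs = [w / total for w in weights] if sum(weights) else [1 / len(moved)] * len(moved)
    return [moved[_pick(probs)] for _ in moved]

# Track a static target at (4, 6) from noisy ranges.
target = (4.0, 6.0)
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
for _ in range(20):
    ranges = [math.hypot(target[0] - ax, target[1] - ay) + random.gauss(0, SIGMA)
              for ax, ay in ANCHORS]
    particles = step(particles, ranges)
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
print(est)  # posterior mean, close to the target
```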
Abstract:
OBJECTIVE: Mechanical evaluation of a novel screw position used for repair in a type III distal phalanx fracture model, and assessment of solar canal penetration (SCP). STUDY DESIGN: Experimental study. SAMPLE POPULATION: Disarticulated equine hooves (n = 24) and 24 isolated distal phalanges. METHODS: Hooves/distal phalanges cut in a sagittal plane were repaired with 1 of 2 different cortical screw placements in lag fashion. In group 1 (conventional screw placement), the screw was inserted halfway between the proximal border of the solar canal (SC) and the subchondral bone surface on a line parallel to the dorsal cortex, whereas in group 2, the screw was inserted more palmar/plantar, where a perpendicular line drawn from the group 1 position reached the palmar/plantar cortex. Construct strength was evaluated by 3-point bending to failure. SCP was assessed by CT imaging and macroscopically. RESULTS: Screws were significantly longer in group 2 and in forelimbs. Group 2 isolated distal phalanges had a significantly more rigid fixation compared with the conventional screw position (maximum point at failure 31% higher, bending stiffness 41% higher). Lumen reduction of the SC was observed in 13/52 specimens (all from group 2), of which 9 were forelimbs. CONCLUSIONS: More distal screw positioning compared with the conventionally recommended screw position for internal fixation of type III distal phalangeal fractures allows placement of a longer screw and renders a more rigid fracture fixation. The novel screw position, however, carries a higher risk of SCP.
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only is the mean dynamic topography on the particular ocean model grid required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available.
The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
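The opening relation, that geostrophic surface velocities follow from gradients of the mean dynamic topography η, is the standard geostrophic balance u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x, with Coriolis parameter f = 2Ω sin(φ). A minimal numerical sketch is given below; the gradient value and latitude are illustrative assumptions, not values from the paper.

```python
import math

G = 9.81            # gravitational acceleration (m/s^2)
OMEGA = 7.2921e-5   # Earth's rotation rate (rad/s)

def geostrophic_velocity(deta_dx, deta_dy, lat_deg):
    # Surface geostrophic velocity (m/s) from MDT gradients (dimensionless,
    # i.e., meters of topography per meter of distance) at latitude lat_deg.
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
    u = -(G / f) * deta_dy                           # eastward component
    v = (G / f) * deta_dx                            # northward component
    return u, v

# Illustrative case: MDT dropping 0.5 m over 500 km northward at 45° N
# yields an eastward surface current of roughly 0.1 m/s.
u, v = geostrophic_velocity(0.0, -0.5 / 500e3, 45.0)
print(round(u, 3), round(v, 3))
```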
Abstract:
Sediment spectral reflectance measurements were generated aboard the JOIDES Resolution during Ocean Drilling Program Leg 162 shipboard operations. The large size of the raw data set (over 1.3 gigabytes) and limited computer hard disk storage space precluded detailed analysis of the data at sea, although broad-band averages were used as aids in developing splices and determining lithologic boundaries. This data report describes the methods used to collect these data and their shipboard and postcruise processing. These initial results provide the basis for further postcruise research.