Abstract:
SUMMARY: Fracture stabilization in the diabetic patient is associated with higher complication rates, particularly infection and impaired wound healing, which can lead to major tissue damage, osteomyelitis, and higher amputation rates. With an increasing prevalence of diabetes and an aging population, the risks of infection of internal fixation devices are expected to grow. Although numerous retrospective clinical studies have identified a relationship between diabetes and infection, few animal models have been used to investigate postoperative surgical-site infections associated with internal fixator implantation and diabetes. The authors therefore refined the protocol for inducing hyperglycemia and compared the bacterial burden in control rats with that in pharmacologically induced type 1 diabetic rats after both groups underwent internal fracture plate fixation and Staphylococcus aureus surgical-site inoculation. Using an initial series of streptozotocin doses, followed by optional additional doses to reach a target blood glucose range of 300 to 600 mg/dl, the authors reliably induced diabetes in 100 percent of the rats (n = 16), in which a narrow hyperglycemic range was maintained 14 days after onset of diabetes (mean ± SEM, 466 ± 16 mg/dl; coefficient of variation, 0.15). With respect to their primary endpoint, the authors quantified a significantly higher infectious burden in inoculated diabetic animals (median, 3.2 × 10 colony-forming units/mg dry tissue) compared with inoculated nondiabetic animals (7.2 × 10 colony-forming units/mg dry tissue). These data support the authors' hypothesis that uncontrolled diabetes adversely affects the immune system's ability to clear Staphylococcus aureus associated with internal hardware.
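A minimal sketch of the kind of summary statistics reported above: the coefficient of variation of blood glucose and a nonparametric comparison of CFU burdens between groups. The numeric values and the choice of the Mann-Whitney U test are illustrative assumptions, not data or methods taken from the study.

```python
# Illustrative only: synthetic values, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical day-14 blood glucose readings (mg/dl) for diabetic rats
glucose = np.array([430, 455, 470, 490, 445, 480, 460, 500])
cv = glucose.std(ddof=1) / glucose.mean()   # coefficient of variation
print(f"mean = {glucose.mean():.0f} mg/dl, CV = {cv:.2f}")

# Hypothetical bacterial burdens (CFU per mg dry tissue) in inoculated animals
cfu_diabetic = np.array([2.1e5, 3.5e5, 4.0e5, 2.8e5])
cfu_nondiabetic = np.array([5.0e3, 8.1e3, 6.9e3, 9.2e3])

# The endpoint is a difference in medians; Mann-Whitney U is one common
# nonparametric way to test it (the abstract does not name the test used).
stat, p = mannwhitneyu(cfu_diabetic, cfu_nondiabetic, alternative="two-sided")
print(f"median diabetic = {np.median(cfu_diabetic):.1e}, "
      f"median nondiabetic = {np.median(cfu_nondiabetic):.1e}, p = {p:.3f}")
```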
Abstract:
BACKGROUND: Some of the 600,000 patients with solid organ allotransplants need reconstruction with a composite tissue allotransplant, such as the hand, abdominal wall, or face. The aim of this study was to develop a rat model for assessing the effects of a secondary composite tissue allotransplant on a primary heart allotransplant. METHODS: Hearts of Wistar Kyoto rats were harvested and transplanted heterotopically to the neck of recipient Fischer 344 rats. The anastomoses were performed between the donor brachiocephalic artery and the recipient left common carotid artery, and between the donor pulmonary artery and the recipient external jugular vein. Recipients received cyclosporine A for 10 days only. Heart rate was assessed noninvasively. The sequential composite tissue allotransplant consisted of a 3 × 3-cm abdominal musculocutaneous flap harvested from Lewis rats and transplanted to the abdomen of the heart allotransplant recipients. The abdominal flap vessels were connected to the femoral vessels. No further immunosuppression was administered following the composite tissue allotransplant. Ten days after composite tissue allotransplantation, rejection of the heart and abdominal flap was assessed histologically. RESULTS: The rat survival rate of the two-stage transplant surgery was 80 percent. The transplanted heart rate decreased from 150 ± 22 beats per minute immediately after transplant to 83 ± 12 beats per minute on day 20 (10 days after stopping immunosuppression). CONCLUSIONS: This sequential allotransplant model is technically demanding. It will facilitate investigation of the effects of a secondary composite tissue allotransplant following primary solid organ transplantation and could be useful in developing future immunotherapeutic strategies.
Abstract:
UNLABELLED: Response inhibition is a key component of executive control, but its relation to other cognitive processes is not well understood. We recently documented the "inhibition-induced forgetting effect": no-go cues are remembered more poorly than go cues. We attributed this effect to central-resource competition, whereby response inhibition saps attention away from memory encoding. However, this proposal is difficult to test with behavioral means alone. We therefore used fMRI in humans to test two neural predictions of the "common resource hypothesis": (1) brain regions associated with response inhibition should exhibit greater resource demands during encoding of subsequently forgotten than remembered no-go cues; and (2) this higher inhibitory resource demand should lead to memory-encoding regions having fewer resources available during encoding of subsequently forgotten no-go cues. Participants categorized face stimuli by gender in a go/no-go task and, following a delay, performed a surprise recognition memory test for those faces. Replicating previous findings, memory was worse for no-go than for go stimuli. Crucially, forgetting of no-go cues was predicted by high inhibitory resource demand, as quantified by the trial-by-trial ratio of activity in neural "no-go" versus "go" networks. Moreover, this index of inhibitory demand exhibited an inverse trial-by-trial relationship with activity in brain regions responsible for the encoding of no-go cues into memory, notably the ventrolateral prefrontal cortex. This seesaw pattern between the neural resource demand of response inhibition and activity related to memory encoding directly supports the hypothesis that response inhibition temporarily saps attentional resources away from stimulus processing. SIGNIFICANCE STATEMENT: Recent behavioral experiments showed that inhibiting a motor response to a stimulus (a "no-go cue") impairs subsequent memory for that cue. Here, we used fMRI to test whether this "inhibition-induced forgetting effect" is caused by competition for neural resources between the processes of response inhibition and memory encoding. We found that trial-by-trial variations in neural inhibitory resource demand predicted subsequent forgetting of no-go cues and that higher inhibitory demand was furthermore associated with lower concurrent activation in brain regions responsible for successful memory encoding of no-go cues. Thus, motor inhibition and stimulus encoding appear to compete with each other: when more resources have to be devoted to inhibiting action, fewer are available for encoding sensory stimuli.
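A brief sketch of the kind of trial-by-trial index described above: the ratio of activity in a "no-go" (inhibition) network to a "go" (response) network, used to predict subsequent forgetting. All values are synthetic and the logistic model is an assumption for illustration; this is not the study's analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical single-trial activity estimates (e.g., per-trial betas) for a
# "no-go" inhibition network and a "go" response network
nogo_activity = rng.normal(1.0, 0.3, n_trials)
go_activity = rng.normal(1.0, 0.3, n_trials).clip(min=0.1)

# Inhibitory demand index: trial-by-trial ratio of no-go to go network activity
demand = nogo_activity / go_activity

# Hypothetical subsequent-memory outcome (1 = later forgotten), generated so
# that higher demand makes forgetting more likely
p_forget = 1 / (1 + np.exp(-(demand - 1.0) * 2.0))
forgotten = rng.binomial(1, p_forget)

# Does the demand index predict later forgetting of no-go cues?
model = LogisticRegression().fit(demand.reshape(-1, 1), forgotten)
print("slope for demand index:", model.coef_[0][0])
```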
Abstract:
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.
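For readers unfamiliar with how lateralized components such as the N2pc are quantified, the sketch below computes the standard contralateral-minus-ipsilateral difference wave and its mean amplitude in a time window. The electrode labels (PO7/PO8), sampling rate, and window are assumptions for illustration, and the waveforms are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                              # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.6, 1 / fs)      # epoch from -200 to 600 ms

# Hypothetical trial-averaged waveforms (in microvolts) at left/right posterior
# sites for targets appearing in the left or right visual field
po7_left_target = rng.standard_normal(len(t)) * 0.5
po8_left_target = rng.standard_normal(len(t)) * 0.5
po7_right_target = rng.standard_normal(len(t)) * 0.5
po8_right_target = rng.standard_normal(len(t)) * 0.5

# Contralateral = electrode opposite the target side; ipsilateral = same side
contra = (po8_left_target + po7_right_target) / 2
ipsi = (po7_left_target + po8_right_target) / 2
n2pc_wave = contra - ipsi

# Mean amplitude in the (assumed) 170-250 ms window
window = (t >= 0.170) & (t <= 0.250)
print("N2pc mean amplitude (uV):", n2pc_wave[window].mean())
```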
Abstract:
BACKGROUND: Lumbar disc herniation has a prevalence of up to 58% in the athletic population. Lumbar discectomy is a common surgical procedure to alleviate pain and disability in athletes. We systematically reviewed the current clinical evidence regarding athlete return to sport (RTS) following lumbar discectomy compared to conservative treatment. METHODS: A computer-assisted literature search of MEDLINE, CINAHL, Web of Science, PEDro, OVID and PubMed databases (from inception to August 2015) was utilised using keywords related to lumbar disc herniation and surgery. The design of this systematic review was developed using the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Methodological quality of individual studies was assessed using the Downs and Black scale (0-16 points). RESULTS: The search strategy revealed 14 articles. Downs and Black quality scores were generally low, with no articles in this review earning a high-quality rating, only 5 earning a moderate quality rating, and 9 of the 14 earning a low-quality rating. The pooled RTS for surgical intervention across all included studies was 81% (95% CI 76% to 86%) with significant heterogeneity (I² = 63.4%, p < 0.001), although pooled estimates report only 59% RTS at the same level. Pooled analysis showed no difference in RTS rate between surgical (84% (95% CI 77% to 90%)) and conservative intervention (76% (95% CI 56% to 92%); p = 0.33). CONCLUSIONS: Studies comparing surgical versus conservative treatment found no significant difference between groups regarding RTS. Not all athletes who RTS return to the level of participation at which they performed prior to surgery. Owing to the heterogeneity and low methodological quality of included studies, rates of RTS cannot be accurately determined.
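To illustrate how a pooled proportion and its heterogeneity might be computed, the sketch below uses inverse-variance weighting of logit-transformed proportions and Cochran's Q to derive I². The study counts are hypothetical, and this is only one of several valid pooling methods; the review does not state which was used.

```python
import numpy as np

events = np.array([40, 55, 30, 70])   # athletes returning to sport (assumed)
totals = np.array([50, 65, 40, 80])   # athletes per study (assumed)

p = events / totals
logit = np.log(p / (1 - p))
var = 1 / events + 1 / (totals - events)   # variance of each logit proportion
w = 1 / var

pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_p = 1 / (1 + np.exp(-pooled_logit))

# Cochran's Q and the I^2 heterogeneity statistic
Q = np.sum(w * (logit - pooled_logit) ** 2)
df = len(events) - 1
I2 = max(0.0, (Q - df) / Q) * 100

print(f"pooled RTS = {pooled_p:.1%}, Q = {Q:.2f}, I^2 = {I2:.1f}%")
```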
Abstract:
Book review of: Chance Encounters: A First Course in Data Analysis and Inference by Christopher J. Wild and George A. F. Seber. 2000, John Wiley & Sons Inc. Hard-bound, xviii + 612 pp. ISBN 0-471-32936-3.
Abstract:
As announced in the November 2000 issue of MathStats&OR [1], one of the projects supported by the Maths, Stats & OR Network funds is an international survey of research into pedagogic issues in statistics and OR. I am taking the lead on this and report here on the progress that has been made during the first year. A paper giving some background to the project and describing initial thinking on how it might be implemented was presented at the 53rd session of the International Statistical Institute in Seoul, Korea, in August 2001 in a session on The future of statistics education research [2]. It sounded easy. I considered that I was something of an expert on surveys, having lectured on the topic for many years and having helped students and others who were doing surveys, particularly with the design of their questionnaires. Surely all I had to do was to draft a few questions, send them electronically to colleagues in statistical education who would be only too happy to respond, and summarise their responses? I should have learnt from my experience of advising all those students who thought that doing a survey was easy and to whom I had to explain that their ideas were too ambitious. There are several inter-related stages in survey research, and it is important to think about these before rushing into the collection of data. In the case of the survey in question, this planning stage revealed several challenges. Surveys are usually done for a purpose, so even before planning how to do them, it is advisable to think about the final product and the dissemination of results. This is the route I followed.
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load-sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
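A minimal sketch of the kind of proactive, decentralised load sharing described: each node keeps a continuously refreshed view of peer load and dispatches an incoming task, non-preemptively, to the least-loaded known node. The data structures, gossip step, and selection rule are assumptions for illustration, not the paper's actual protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float = 0.0            # e.g., run-queue length or CPU utilisation
    peer_loads: dict = field(default_factory=dict)

    def gossip(self, peers):
        """Proactively gather load information before any task arrives."""
        for peer in peers:
            self.peer_loads[peer.name] = peer.load

    def allocate(self, task_cost, nodes):
        """Non-preemptive allocation: pick the least-loaded known node."""
        candidates = {self.name: self.load, **self.peer_loads}
        target_name = min(candidates, key=candidates.get)
        target = next(n for n in nodes if n.name == target_name)
        target.load += task_cost
        return target_name

nodes = [Node("n1", 0.2), Node("n2", 0.7), Node("n3", 0.4)]
for n in nodes:
    n.gossip([p for p in nodes if p is not n])
print(nodes[1].allocate(task_cost=0.1, nodes=nodes))   # expected: "n1"
```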
Abstract:
The importance of patterns in constructing complex systems has long been recognised in other disciplines. In software engineering, for example, well-crafted object-oriented architectures contain several design patterns. Focusing on mechanisms of constructing software during system development can yield an architecture that is simpler, clearer and more understandable than if design patterns were ignored or not properly applied. In this paper, we propose a model that uses object-oriented design patterns to develop a core bitemporal conceptual model. We define three core design patterns that form a core bitemporal conceptual model of a typical bitemporal object. Our framework is known as the Bitemporal Object, State and Event Modelling Approach (BOSEMA), and the resulting core model is known as a Bitemporal Object, State and Event (BOSE) model. Using this approach, we demonstrate that we can enrich data modelling by using well-known design patterns, which can help designers build complex models of bitemporal databases.
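The bitemporal idea underlying models such as BOSE can be made concrete with a small sketch: every recorded state carries both a valid-time interval (when the fact holds in the real world) and a transaction-time interval (when the database believed it). This is a generic illustration of bitemporal data, not the BOSEMA patterns themselves.

```python
from dataclasses import dataclass
from datetime import date

MAX_DATE = date(9999, 12, 31)   # "until further notice" sentinel

@dataclass
class BitemporalState:
    value: str
    valid_from: date            # when the fact starts to hold in reality
    valid_to: date              # when it stops holding
    tx_from: date               # when the database started believing it
    tx_to: date = MAX_DATE      # when the belief was superseded

history = [
    # Recorded on 2020-01-10: salary is 50k, valid from 2020-01-01 onwards
    BitemporalState("salary=50k", date(2020, 1, 1), MAX_DATE, date(2020, 1, 10)),
]

def as_of(states, valid_at, known_at):
    """Return the states valid at `valid_at`, as the database knew them at `known_at`."""
    return [s for s in states
            if s.valid_from <= valid_at < s.valid_to
            and s.tx_from <= known_at < s.tx_to]

print(as_of(history, valid_at=date(2021, 6, 1), known_at=date(2021, 6, 1)))
```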
Abstract:
This paper addresses some controversial issues relating to two main questions. Firstly, we discuss 'man-in-the-loop' issues in SAACS. Some people advocate that a human must always be in the loop so that human decisions can override autonomic components. In this case, the system has two subsystems: man and machine. Can we, however, have a fully autonomic machine, with no human in sight, even for short periods of time? What kinds of systems require a human always to be in the loop? What is the optimum balance in self-to-human control? How do we determine the optimum? How far can we go in describing self-behaviour? How does a SAACS system handle unexpected behaviour? Secondly, what are the challenges and obstacles in testing SAACS in the context of the self/human dilemma? Are there any lessons to be learned from other programmes, e.g. Star Wars, aviation and space exploration? What role do human factors and behavioural models play in interacting with SAACS?
Abstract:
The emergent behaviour of autonomic systems, together with the scale of their deployment, impedes prediction of the full range of configuration and failure scenarios; thus it is not possible to devise management and recovery strategies to cover all possible outcomes. One solution to this problem is to embed self-managing and self-healing abilities into such applications. Traditional design approaches favour determinism, even when unnecessary. This can lead to conflicts between the non-functional requirements. Natural systems such as ant colonies have evolved cooperative, finely tuned emergent behaviours which allow the colonies to function at very large scale and to be very robust, although non-deterministic. Simple pheromone-exchange communication systems are highly efficient and are a major contribution to their success. This paper proposes that we look to natural systems for inspiration when designing architecture and communications strategies, and presents an election algorithm which encapsulates non-deterministic behaviour to achieve high scalability, robustness and stability.
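As a toy illustration of the general approach, the sketch below elects a coordinator probabilistically in proportion to a decaying, capacity-reinforced "pheromone" level per node. It is not the paper's election algorithm; it only shows how deliberately non-deterministic choices can still converge on a robust, stable outcome.

```python
import random

pheromone = {"n1": 1.0, "n2": 1.0, "n3": 1.0}
capacity = {"n1": 0.9, "n2": 0.3, "n3": 0.5}   # hypothetical node health scores

def reinforce_and_decay(evaporation=0.8):
    """Each round: evaporate old pheromone, then deposit new according to capacity."""
    for node in pheromone:
        pheromone[node] = pheromone[node] * evaporation + capacity[node]

def elect():
    """Probabilistic election weighted by pheromone level."""
    nodes, weights = zip(*pheromone.items())
    return random.choices(nodes, weights=weights, k=1)[0]

for _ in range(10):
    reinforce_and_decay()
print("elected coordinator:", elect())   # usually, but not always, "n1"
```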
Abstract:
The high-intensity, high-resolution x-ray source at the European Synchrotron Radiation Facility (ESRF) has been used in x-ray diffraction (XRD) experiments to detect intermetallic compounds (IMCs) in lead-free solder bumps. The IMCs found in 95.5Sn3.8Ag0.7Cu solder bumps on Cu pads with electroplated-nickel immersion-gold (ENIG) surface finish are consistent with results based on traditional destructive methods. Moreover, after positive identification of the IMCs from the diffraction data, spatial distribution plots over the entire bump were obtained. These spatial distributions for selected intermetallic phases display the layer thickness and confirm the locations of the IMCs. For isothermally aged solder samples, results have shown that much thicker layers of IMCs have grown from the pad interface into the bulk of the solder. Additionally, the XRD technique has also been used in a temperature-resolved mode to observe the formation of IMCs, in situ, during the solidification of the solder joint. The results demonstrate that the XRD technique is very attractive as it allows for nondestructive investigations to be performed on expensive state-of-the-art electronic components, thereby allowing new, lead-free materials to be fully characterized.
Abstract:
The needs for various forms of information systems relating to the European environment and ecosystem are reviewed, and their limitations indicated. Existing information systems are reviewed and compared in terms of aims and functionalities. We consider two technical challenges involved in attempting to develop an IEEICS. First, there is the challenge of developing an Internet-based communication system that allows fluent access to information stored in a range of distributed databases; some of the currently available solutions, i.e. Web service federations, are considered. The second main challenge arises from the fact that there is general intra-national heterogeneity in the definitions adopted and the measurement systems used throughout the nations of Europe. Integrated strategies are needed.
Abstract:
This work proceeds from the assumption that a European environmental information and communication system (EEICS) is already established. In the context of primary users (land-use planners, conservationists, and environmental researchers), we ask what use may be made of the EEICS for building models and tools that are of use in building decision support systems for the land-use planner. The complex task facing the next generation of environmental and forest modellers is described, and a range of relevant modelling approaches is reviewed. These include visualization and GIS; statistical tabulation and database SQL; and MDA and OLAP methods. The major problem of the noncomparability of the definitions and measures of forest area and timber volume is introduced, and the possibility of a model-based solution is considered. The possibility of using an ambitious and challenging biogeochemical modelling approach to understanding and managing European forests sustainably is discussed. It is emphasised that all modern methodological disciplines must be brought to bear, and that a heuristic hybrid modelling approach should be used so as to ensure that the benefits of practical empirical modelling approaches are utilised in addition to the scientifically well-founded and holistic ecosystem and environmental modelling. The data and information system required is likely to end up as a grid-based framework because of the heavy use of computationally intensive model-based facilities.