981 results for augmented Lagrangian methods
Abstract:
Aim: To characterize the inhibition of platelet function by paracetamol in vivo and in vitro, and to evaluate the possible interaction of paracetamol and diclofenac or valdecoxib in vivo. To assess the analgesic effect of the drugs in an experimental pain model. Methods: Healthy volunteers received increasing doses of intravenous paracetamol (15, 22.5 and 30 mg/kg), or the combination of paracetamol 1 g and diclofenac 1.1 mg/kg or valdecoxib 40 mg (as the pro-drug parecoxib). Inhibition of platelet function was assessed with photometric aggregometry, the platelet function analyzer (PFA-100), and release of thromboxane B2. Analgesia was assessed with the cold pressor test. The inhibition coefficient of platelet aggregation by paracetamol was determined, as well as the nature of the interaction between paracetamol and diclofenac by an isobolographic analysis in vitro. Results: Paracetamol inhibited platelet aggregation and TxB2 release dose-dependently in volunteers and concentration-dependently in vitro. The inhibition coefficient was 15.2 mg/L (95% CI 11.8–18.6). Paracetamol augmented the platelet inhibition by diclofenac in vivo, and the isobole showed that this interaction is synergistic. Paracetamol showed no interaction with valdecoxib. The PFA-100 appeared insensitive in detecting platelet dysfunction caused by paracetamol, and the cold pressor test showed no analgesia. Conclusions: Paracetamol inhibits platelet function in vivo and shows synergism when combined with diclofenac. This effect may increase the risk of bleeding in surgical patients with an impaired haemostatic system. The combination of paracetamol and valdecoxib may be useful in patients with a low risk of thromboembolism. The PFA-100 seems unsuitable for detection of platelet dysfunction and the cold pressor test seems unsuitable for detection of analgesia by paracetamol.
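The dose-response analysis described above can be illustrated with a minimal sketch: fitting an inhibition coefficient (the concentration producing 50% inhibition) to concentration-response data by least squares. All data values here are invented for illustration; only the reported coefficient of 15.2 mg/L is taken from the abstract, and the simple Emax model and grid search are assumptions, not the authors' actual fitting procedure.

```python
def inhibition(conc, ic50):
    """Simple Emax model: percent inhibition of aggregation at `conc` mg/L."""
    return 100.0 * conc / (ic50 + conc)

def fit_ic50(concs, responses):
    """Least-squares grid search for the inhibition coefficient."""
    grid = [0.1 * k for k in range(1, 1000)]  # candidate IC50s, 0.1-99.9 mg/L
    def sse(ic50):
        return sum((inhibition(c, ic50) - r) ** 2
                   for c, r in zip(concs, responses))
    return min(grid, key=sse)

# synthetic responses generated from the reported coefficient of 15.2 mg/L
concs = [2, 5, 10, 15, 30, 60]
responses = [inhibition(c, 15.2) for c in concs]
print(round(fit_ic50(concs, responses), 1))  # recovers 15.2
```

In practice a confidence interval such as the reported 11.8-18.6 mg/L would come from the variability across subjects or replicates, which this toy fit does not model.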
Abstract:
Past studies that have compared LBB-stable discontinuous- and continuous-pressure finite element formulations on a variety of problems have concluded that both methods yield solutions of comparable accuracy, and that the choice of interpolation is dictated by which of the two is more efficient. In this work, we show that using discontinuous-pressure interpolations can yield inaccurate solutions at large times on a class of transient problems, while the continuous-pressure formulation yields solutions that are in good agreement with the analytical solution.
Abstract:
Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat the nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic model patient can differ from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective in treating CML in realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
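The model-following adaptive idea can be sketched on a scalar toy system: a plant with an unknown parameter is driven to track a stable reference model, with the parameter estimated online by a Lyapunov-based update law. This is a textbook model-reference adaptive control (MRAC) example, not the paper's cell-dynamics model or neural-network controller; all dynamics, gains and initial conditions here are invented.

```python
def simulate(a_true=2.0, gamma=5.0, dt=0.001, steps=20000):
    """Scalar MRAC toy: plant x' = a*x + u tracks reference model xm' = -xm."""
    x, xm, a_hat = 1.0, 1.0, 0.0        # plant, reference model, estimate of a
    for _ in range(steps):
        e = x - xm                      # model-following error
        u = -a_hat * x - x              # certainty-equivalence control law
        x += dt * (a_true * x + u)      # forward-Euler plant update
        xm += dt * (-xm)                # stable reference model
        a_hat += dt * gamma * e * x     # Lyapunov-based adaptation law
    return x, xm, a_hat

x, xm, a_hat = simulate()
print(abs(x - xm))  # model-following error driven near zero
```

The Lyapunov function V = e^2/2 + (a - a_hat)^2/(2*gamma) has V' = -e^2 under this update, which guarantees boundedness and convergence of the tracking error (though not necessarily of the parameter estimate), mirroring the stability-plus-bounded-weights guarantee described in the abstract.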
Abstract:
A lack of suitable venous graft material or poor outflow is an increasingly encountered situation in peripheral vascular surgery. Prosthetic grafts have clearly worse patency than vein grafts in femorodistal bypass surgery. The use of an adjuvant arteriovenous fistula (av-fistula) at the distal anastomosis has been postulated to improve the flow and thus increase prosthetic graft patency. In theory the adjuvant fistula might have the same effect in a compromised-outflow venous bypass. A free flap transfer also augments graft flow and may have a positive effect on an ischaemic limb. The aim of this study was to evaluate the possible benefit of an adjuvant av-fistula and of an internal av-fistula within a free flap transfer on the patency and outcome of an infrapopliteal bypass. The effect of the av-fistula on bypass haemodynamics was also assessed, along with possible adverse effects. Patients and methods: 1. A prospective randomised multicentre trial comprised 59 patients with critical leg ischaemia and no suitable veins for grafting. Femorocrural polytetrafluoroethylene (PTFE) bypasses with a distal vein cuff, with or without an adjuvant av-fistula, were performed. The outcome was assessed according to graft patency and leg salvage. 2. Haemodynamic measurements were performed on a total of 50 patients from Study I with a prolonged follow-up. 3. Nine critically ischaemic limbs were treated with a modified radial forearm flap transfer in combination with a femorodistal bypass operation. An internal av-fistula was created within the free flap transfer to increase flap artery and bypass graft flow. 4. The effect of a previous free flap transfer on bypass haemodynamics was studied in a case report. 5. In a retrospective multicentre case-control study, 77 infrapopliteal vein bypasses with an adjuvant av-fistula were compared with matched controls without a fistula. The outcome and haemodynamics of the bypasses were recorded. Main results: 1.
The groups with and without the av-fistula did not differ as regards prosthetic graft patency or leg salvage. 2. The intra- and postoperative prosthetic graft flow was significantly increased in the patients with the av-fistula. However, this increase did not improve patency. There was no difference in patency between the groups, even in the extended follow-up. 3. The vein graft flow increased significantly after the anastomosis of the radial forearm flap with an internal av-fistula. 4. A previously performed free flap transfer significantly augmented the flow of a poor-outflow femoropedal bypass graft. 5. The adjuvant av-fistula increased the venous infrapopliteal bypass flow significantly. The increased flow did not, however, lead to improved graft patency or leg salvage. Conclusion: An adjuvant av-fistula does not improve the patency of a femorocrural PTFE bypass with a distal vein cuff, despite the fact that the flow values increased both in the intraoperative measurements and during the immediate postoperative surveillance. The adjuvant av-fistula also increased graft flow significantly in a poor-outflow venous bypass, but regardless of this the outcome was not improved. The adjuvant av-fistula rarely caused adverse effects. In a group of diabetic patients, the flow in a vascular bypass graft was augmented by an internal av-fistula within a radial forearm flap, and similarly, in a patient with a previous free flap transfer, a high intraoperative graft flow was achieved due to the free flap shunt effect.
Abstract:
Background. Kidney transplantation (KTX) is considered to be the best treatment of terminal uremia. Despite improvements in short-term graft survival, a considerable number of kidney allografts are lost due to the premature death of patients with a functional kidney and to chronic allograft nephropathy (CAN). Aim. To investigate the risk factors involved in the progression of CAN and to analyze diagnostic methods for this entity. Materials and methods. Altogether, 153 implant and 364 protocol biopsies obtained between June 1996 and April 2008 were analyzed. The biopsies were classified according to Banff ’97 and the chronic allograft damage index (CADI). Immunohistochemistry for TGF-β1 was performed in 49 biopsies. Kidney function was evaluated by creatinine and/or cystatin C measurement and by various estimates of glomerular filtration rate (GFR). Demographic data of the donors and recipients were recorded after 2 years’ follow-up. Results. Most of the 3-month biopsies (73%) were nearly normal. The mean CADI score in the 6-month biopsies decreased significantly after 2001. Diastolic hypertension correlated with ΔCADI. Serum creatinine concentration at hospital discharge and glomerulosclerosis were risk factors for ΔCADI. High total and LDL cholesterol, low HDL and hypertension correlated with chronic histological changes. The mean age of the donors increased from 41 to 52 years. Older donors were more often women who had died from an underlying disease. The prevalence of delayed graft function (DGF) increased over the years, while acute rejections (AR) decreased significantly. Sub-clinical AR was observed in 4% and did not affect long-term allograft function or CADI. The recipients’ drug treatment was modified over the course of the studies, with mycophenolate mofetil, tacrolimus, statins and renin-angiotensin-system blockers prescribed more frequently after 2001. Patients with a higher ΔCADI had lower GFR during follow-up.
CADI over 2 was best predicted by creatinine, although with modest sensitivity and specificity. Neither cystatin C nor other estimates of GFR were superior to creatinine for CADI prediction. Cyclosporine A toxicity was seldom seen. Low cyclosporine A concentration after 2 h correlated with TGF-β1 expression in interstitial inflammatory cells, and this predicted worse graft function. Conclusions. The progression of CAN has been affected by two major factors: the donors’ characteristics and the recipients’ hypertension. The increased prevalence of DGF might be a consequence of the acceptance of older donors who had died from an underlying disease. Implant biopsies proved to be of prognostic value, and they are essential for comparison with subsequent biopsies. The progression of histological damage was associated with hypertension and dyslipidemia. The significance of the augmented expression of TGF-β1 in inflammatory cells is unclear, but it may be related to low immunosuppression. Serum creatinine is the most suitable tool for monitoring kidney allograft function on an everyday basis. However, protocol biopsies at 6 and 12 months predicted late kidney allograft dysfunction and affected the clinical management of the patients. Protocol biopsies are thus a suitable surrogate to be used in clinical trials and for monitoring kidney allografts.
Abstract:
Idiopathic pulmonary fibrosis (IPF) is an interstitial lung disease with unknown aetiology and poor prognosis. IPF is characterized by alveolar epithelial damage that leads to tissue remodelling and ultimately to the loss of normal lung architecture and function. Treatment has been focused on anti-inflammatory therapies, but due to their poor efficacy new therapeutic modalities are being sought. There is a need for early diagnosis and also for differential diagnostic markers for IPF and other interstitial lung diseases. The study utilized patient material obtained from bronchoalveolar lavage (BAL), diagnostic biopsies or lung transplantation. Human pulmonary fibroblast cell cultures were propagated, and asbestos-induced pulmonary fibrosis in mice was used as an experimental animal model of IPF. The possible markers for IPF were screened by immunohistochemistry, RT-PCR, ELISA and western blot. Matrix metalloproteinases (MMPs) are proteolytic enzymes that participate in tissue remodelling. Microarray studies have introduced potential markers that could serve as additional tools for the assessment of IPF, and one of the most promising was MMP-7. MMP-7 protein levels were measured in the BAL fluid of patients with idiopathic interstitial lung diseases or idiopathic cough. MMP-7 was, however, similarly elevated in the BAL fluid in all these disorders and thus cannot be used as a differential diagnostic marker for IPF. Activation of transforming growth factor (TGF)-β is considered to be a key element in the progression of IPF. Bone morphogenetic proteins (BMPs) are negative regulators of intracellular TGF-β signalling, and BMP-4 signalling is in turn negatively regulated by gremlin. Gremlin was found to be highly upregulated in the IPF lungs and IPF fibroblasts. Gremlin was detected in the thickened IPF parenchyma and the endothelium of small capillaries, whereas in non-specific interstitial pneumonia it localized predominantly in the alveolar epithelium.
Parenchymal gremlin immunoreactivity might indicate IPF-type interstitial pneumonia. Gremlin mRNA levels were higher in patients with end-stage fibrosis, suggesting that gremlin might be a marker of more advanced disease. Characterization of the fibroblastic foci in the IPF lungs showed that immunoreactivity to platelet-derived growth factor (PDGF) receptor-α and PDGF receptor-β was elevated in IPF parenchyma, but the fibroblastic foci showed only minor immunoreactivity to the PDGF receptors or the antioxidant peroxiredoxin II. Ki67-positive cells were also observed predominantly outside the fibroblastic foci, suggesting that the fibroblastic foci may not be composed of actively proliferating cells. When inhibition of profibrotic PDGF signalling by imatinib mesylate was assessed, imatinib mesylate reduced asbestos-induced pulmonary fibrosis in mice as well as human pulmonary fibroblast migration in vitro, but it had no effect on lung inflammation.
Abstract:
Objectives In China, “serious road traffic crashes” (SRTCs) are those in which there are 10-30 fatalities, 50-100 serious injuries or a total cost of 50-100 million RMB ($US8-16m), and “particularly serious road traffic crashes” (PSRTCs) are those which are more severe or costly. Due to the large number of fatalities and injuries, as well as the negative public reaction they elicit, SRTCs and PSRTCs have become a great concern in China in recent years. The aim of this study is to identify the main factors contributing to these road traffic crashes and to propose preventive measures to reduce their number. Methods 49 contributing factors of the SRTCs and PSRTCs that occurred from 2007 to 2013 were collected from the database “In-depth Investigation and Analysis System for Major Road traffic crashes” (IIASMRTC) and were analyzed through the integrated use of principal component analysis and hierarchical clustering to determine the primary and secondary groups of contributing factors. Results Speeding and overloading of passengers were the primary contributing factors, featuring in up to 66.3% and 32.6% of accidents respectively. Two secondary contributing factors were road-related: lack of or nonstandard roadside safety infrastructure, and slippery roads due to rain, snow or ice. Conclusions The current approach to SRTCs and PSRTCs is focused on the attribution of responsibility and the enforcement of regulations considered relevant to particular SRTCs and PSRTCs. It would be more effective to investigate the contributing factors and characteristics of SRTCs and PSRTCs as a whole, to provide adequate information for safety interventions in regions where SRTCs and PSRTCs are more common.
In addition to mandating a driver training program and publicising the hazards associated with traffic violations, implementation of speed cameras, speed signs, markings and vehicle-mounted GPS is suggested to reduce speeding by passenger vehicles, while increasing regular checks by traffic police and passenger-station staff, and improving transportation management to increase the income of contractors and drivers, are feasible measures to prevent passenger overloading. Other promising measures include regular inspection of roadside safety infrastructure and improving skid resistance on dangerous road sections in mountainous areas.
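The analysis pipeline in the Methods above (coding crashes against contributing factors, then grouping the factors) can be sketched in miniature. The crash matrix, the Jaccard distance and single-linkage clustering used here are illustrative stand-ins with invented data; the study itself worked with 49 factors and combined principal component analysis with hierarchical clustering.

```python
# rows = crashes, columns = factors (invented binary co-occurrence data)
factors = ["speeding", "overloading", "roadside", "slippery"]
crashes = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
]

# frequency of each factor across crashes
freq = {f: sum(row[j] for row in crashes) / len(crashes)
        for j, f in enumerate(factors)}

def jaccard_dist(j, k):
    """1 - Jaccard similarity of the crash sets featuring factors j and k."""
    both = sum(r[j] and r[k] for r in crashes)
    either = sum(r[j] or r[k] for r in crashes)
    return 1.0 - (both / either if either else 0.0)

# single-linkage agglomerative clustering of factors, down to two groups
clusters = [{j} for j in range(len(factors))]
while len(clusters) > 2:
    a, b = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda ij: min(jaccard_dist(p, q)
                                  for p in clusters[ij[0]]
                                  for q in clusters[ij[1]]))
    clusters[a] |= clusters[b]
    del clusters[b]

print(freq["speeding"])                          # most frequent factor
print([{factors[j] for j in c} for c in clusters])
```

On real data the same shape of pipeline yields the primary/secondary factor groups the Results describe; a production analysis would use library implementations rather than this hand-rolled linkage.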
Abstract:
Visual content is a critical component of everyday social media, on platforms explicitly framed around the visual (Instagram and Vine), on those offering a mix of text and images in myriad forms (Facebook, Twitter, and Tumblr), and in apps and profiles where visual presentation and provision of information are important considerations. However, despite being so prominent in forms such as selfies, looping media, infographics, memes, online videos, and more, sociocultural research into the visual as a central component of online communication has lagged behind the analysis of popular, predominantly text-driven social media. This paper underlines the increasing importance of visual elements to digital, social, and mobile media within everyday life, addressing the significant research gap in methods for tracking, analysing, and understanding visual social media as both image-based and intertextual content. In this paper, we build on our previous methodological considerations of Instagram in isolation to examine further questions, challenges, and benefits of studying visual social media more broadly, including methodological and ethical considerations. Our discussion is intended as a rallying cry and provocation for further research into visual (and textual and mixed) social media content, practices, and cultures, mindful of both the specificities of each form, but also, and importantly, the ongoing dialogues and interrelations between them as communication forms.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1–4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from the contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs of the 1888 earthquake shows that this event was a rare occurrence in the area. In contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, i.e. Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude of around 4.6 was minor on the global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of the ground shaking in the soft soil of the coast and river valleys where most of the population was found.
The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and assess methodologies that can be used when dealing with macroseismic intensity. It was evaluated using correspondence analysis. Different approaches such as gridding were tested to estimate the macroseismic field from the intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration. A more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental era was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
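The gridding step mentioned above, estimating a macroseismic field from irregularly spaced intensity data points (IDPs), can be sketched with simple inverse-distance weighting. The coordinates and intensity values below are invented, and averaging ordinal intensities in this way is itself exactly the kind of simplification the text cautions about; the study's actual gridding scheme may differ.

```python
def idw(x, y, idps, power=2.0, eps=1e-12):
    """Inverse-distance-weighted intensity estimate at grid node (x, y)."""
    num = den = 0.0
    for xi, yi, ii in idps:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 < eps:                  # node coincides with an observation
            return ii
        w = 1.0 / d2 ** (power / 2.0)
        num += w * ii
        den += w
    return num / den

# invented IDPs: (easting, northing, macroseismic intensity)
idps = [(0.0, 0.0, 6.0), (10.0, 0.0, 4.0), (0.0, 10.0, 5.0), (10.0, 10.0, 3.0)]
grid = [[idw(x, y, idps) for x in (0.0, 5.0, 10.0)] for y in (0.0, 5.0, 10.0)]
print(grid[1][1])  # centre node is equidistant from all IDPs: plain mean
```

Treating intensity as an ordinal quantity, as the text advocates, would instead call for rank-based or categorical smoothing rather than this arithmetic averaging.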
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(nlog(n)), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. 
Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
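The loglinear candidate-generation idea described above can be sketched as follows: reduce each observation set to a low-dimensional address, sort the addresses (O(n log n)), and compare only neighbours within a tolerance instead of all O(n^2) pairs. The one-dimensional reduction and the invented addresses below are toy stand-ins for the dimensionality-reduction techniques and data structures actually developed for asteroid identification.

```python
def candidate_links(addresses, tol):
    """addresses: (set_id, coord) pairs; return id pairs within `tol`."""
    srt = sorted(addresses, key=lambda t: t[1])         # O(n log n) sort
    pairs = []
    for i, (id_i, c_i) in enumerate(srt):
        j = i + 1
        while j < len(srt) and srt[j][1] - c_i <= tol:  # scan neighbours only
            pairs.append((id_i, srt[j][0]))
            j += 1
    return pairs

# invented addresses: observation sets A-E reduced to one coordinate
addrs = [("A", 0.10), ("B", 5.30), ("C", 0.12), ("D", 9.00), ("E", 5.31)]
print(candidate_links(addrs, tol=0.05))  # [('A', 'C'), ('B', 'E')]
```

Each surviving candidate pair would then be tested by full statistical orbital inversion against the observational uncertainties; only that final, expensive check is restricted to the few neighbouring pairs, which is what keeps the overall method loglinear.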
Abstract:
"We thank Mr Gilder for his considered comments and suggestions for alternative analyses of our data. We also appreciate Mr Gilder’s support of our call for larger studies to contribute to the evidence base for preoperative loading with high-carbohydrate fluids..."
The partition of unity finite element method for elastic wave propagation in Reissner-Mindlin plates
Abstract:
This paper reports a numerical method for modelling elastic wave propagation in plates. The method is based on the partition of unity approach, in which the approximate spectral properties of the infinite-dimensional system are embedded within the space of a conventional finite element method through a consistent technique of waveform enrichment. The technique is general, such that it can be applied to the Lagrangian family of finite elements with specific waveform enrichment schemes, depending on the dominant modes of wave propagation in the physical system. A four-noded element for the Reissner-Mindlin plate is derived in this paper, which is free of shear locking. Such a locking-free property is achieved by removing the transverse displacement degrees of freedom from the element nodal variables and by recovering them through a line integral and a weak constraint in the frequency domain. As a result, the frequency-dependent stiffness matrix and the mass matrix are obtained, which accurately capture the higher-frequency response even with coarse meshes. The steps involved in the numerical implementation of such an element are discussed in detail. Numerical studies on the performance of the proposed element are reported by considering a number of cases, which show very good accuracy and low computational cost. Copyright (C) 2006 John Wiley & Sons, Ltd.