945 results for "Linear equation with two unknowns"


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a semisupervised support vector machine (SVM) that integrates the information of both labeled and unlabeled pixels efficiently. The method's performance is illustrated in the relevant problem of very high resolution image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on very high resolution (VHR) multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and usable by inexperienced users.
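
As a rough illustration of the kernel-combination idea, the sketch below trains an SVM on a convex combination of a base kernel (labeled pixels only) and a similarity kernel derived from a mixture model fitted on labeled plus unlabeled pixels. The RBF base kernel, the GMM-posterior similarity kernel and the weight mu are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: SVM trained on a linear combination of two kernels.
# The RBF base kernel, the GMM-posterior "likelihood" kernel and the
# weight mu are illustrative choices, not the authors' exact method.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(X_labeled, X_unlabeled, gamma=0.5, mu=0.5, n_components=8):
    # Base kernel: similarity between labeled pixels only.
    K_base = rbf_kernel(X_labeled, X_labeled, gamma=gamma)
    # "Likelihood" kernel: fit a mixture on labeled + unlabeled pixels and
    # compare labeled pixels through their posterior membership vectors.
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.vstack([X_labeled, X_unlabeled]))
    P = gmm.predict_proba(X_labeled)         # (n_labeled, n_components)
    K_lik = P @ P.T                          # similarity in posterior space
    return (1.0 - mu) * K_base + mu * K_lik  # convex combination of kernels

# Usage (hypothetical arrays): X_l, y_l are labeled pixels and labels,
# X_u the unlabeled pixels.
# K = composite_kernel(X_l, X_u)
# clf = SVC(kernel="precomputed").fit(K, y_l)
```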

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: To determine the pharmacodynamic (PD) profile of serum total testosterone (TT) and luteinizing hormone (LH) levels in men with secondary hypogonadism following initial and chronic daily oral doses of enclomiphene citrate in comparison to transdermal testosterone, and to determine the effects of daily oral doses of enclomiphene citrate (Androxal®) in comparison to transdermal testosterone on other hormones and markers in men with secondary hypogonadism. PATIENTS AND METHODS: This was a randomized, single-blind, two-center phase II study to evaluate three different doses of enclomiphene citrate (6.25 mg, 12.5 mg and 25 mg Androxal®) versus AndroGel®, a transdermal testosterone, on 24-hour LH and TT in otherwise normal healthy men with secondary hypogonadism. Forty-eight men were enrolled in the trial (ITT population), but 4 men had T levels >350 ng/dL at baseline. Forty-four men completed the study per protocol (PP population). All subjects enrolled in this trial had serum TT in the low range (<350 ng/dL) and had low to normal LH (<12 IU/L) on at least two occasions. TT and LH levels were assessed each hour for 24 hours to examine the effects of each of the three treatment doses of enclomiphene versus a standard dose (5 grams) of transdermal testosterone (AndroGel). In the initial profile, TT and LH were determined in a naïve population following a single initial oral or transdermal treatment (Day 1). This was contrasted with the profile seen after six weeks of continuous daily oral or transdermal treatment (Day 42). Pharmacokinetic analysis of enclomiphene was performed in a select subpopulation. Serum samples were obtained over the course of the study to determine levels of various hormones and lipids. RESULTS: After six weeks of continuous use, the mean ± SD concentration of TT at Day 42, C0hrTT, was 604 ± 160 ng/dL for men taking the highest dose of enclomiphene citrate (enclomiphene, 25 mg daily) and 500 ± 278 ng/dL in men treated with transdermal testosterone. These values were higher than the Day 1 values but not different from each other (p = 0.23, t-test). All three doses of enclomiphene increased C0hrTT, CavgTT, CmaxTT, CminTT and CrangeTT. Transdermal testosterone also raised TT, albeit with more variability and with suppressed LH levels. The patterns of TT over the 24-hour period following six weeks of dosing could be fit to a non-linear function with morning elevations, mid-day troughs, and rising night-time levels. Enclomiphene and transdermal testosterone increased levels of TT within two weeks, but they had opposite effects on FSH and LH. Treatment with enclomiphene did not significantly affect levels of TSH, ACTH, cortisol, lipids, or bone markers. Both transdermal testosterone and enclomiphene citrate decreased IGF-1 levels (p<0.05), but suppression was greater in the enclomiphene citrate groups. CONCLUSIONS: Enclomiphene citrate increased serum LH and TT; however, there was no temporal association between the peak drug levels and the Cmax levels of LH or TT. Enclomiphene citrate consistently increased serum TT into the normal range and increased LH and FSH above the normal range. The effects on LH and TT persisted for at least one week after stopping treatment.

Relevance:

100.00%

Publisher:

Abstract:

Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
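
The pebbling-move rule defined above can be checked directly by brute force; the sketch below tests whether a given pebble distribution can place a pebble on a target vertex. It only illustrates the basic model and does not reproduce the Weight Function Lemma or the linear-optimisation bounds.

```python
# Brute-force check that a pebble distribution can reach a target vertex.
# Illustrates the pebbling-move rule only.
from functools import lru_cache

def solvable(adj, dist, target):
    """adj: {vertex: iterable of neighbours}, dist: tuple of pebble counts
    indexed by vertex, target: vertex index."""
    @lru_cache(maxsize=None)
    def reach(d):
        if d[target] >= 1:
            return True
        for u, nbrs in adj.items():
            if d[u] >= 2:                  # a move needs two pebbles on u
                for v in nbrs:
                    nd = list(d)
                    nd[u] -= 2             # two pebbles leave u ...
                    nd[v] += 1             # ... one arrives at v, one is the toll
                    if reach(tuple(nd)):
                        return True
        return False
    return reach(tuple(dist))

# Example: on the path 0-1-2, four pebbles on vertex 0 reach vertex 2,
# but three pebbles do not.
path = {0: [1], 1: [0, 2], 2: [1]}
print(solvable(path, (4, 0, 0), 2))   # True
print(solvable(path, (3, 0, 0), 2))   # False
```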

Relevance:

100.00%

Publisher:

Abstract:

Objective: To compare pressure–volume (P–V) curves obtained with the Galileo ventilator with those obtained with the CPAP method in patients with ALI or ARDS receiving mechanical ventilation. P–V curves were fitted to a sigmoidal equation with a mean R² of 0.994 ± 0.003. Lower inflection (LIP), upper inflection (UIP), and deflation maximum curvature (PMC) points calculated from the fitted variables showed good correlation between methods, with high intraclass correlation coefficients. Bias and limits of agreement for LIP, UIP and PMC obtained with the two methods in the same patient were clinically acceptable.
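
A minimal sketch of the sigmoidal fitting step, assuming a Venegas-type four-parameter sigmoid V = a + b/(1 + exp(-(P - c)/d)) and the common c ± 2d convention for the lower and upper corner pressures; the exact parameterisation and point definitions used by the authors are not stated in the abstract, and the data below are invented.

```python
# Fit a 4-parameter sigmoid to inflation P-V data and derive
# inflection-related pressures. Parameterisation and the c +/- 2d
# "corner point" convention are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(P, a, b, c, d):
    return a + b / (1.0 + np.exp(-(P - c) / d))

# Illustrative data: airway pressure (cmH2O) and volume (mL).
P = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
V = np.array([90, 150, 300, 600, 950, 1200, 1330, 1390, 1410], dtype=float)

popt, _ = curve_fit(sigmoid, P, V, p0=[V.min(), np.ptp(V), P.mean(), 5.0])
a, b, c, d = popt
r2 = 1 - np.sum((V - sigmoid(P, *popt))**2) / np.sum((V - V.mean())**2)

print(f"inflection pressure c       = {c:.1f} cmH2O")
print(f"lower corner (~LIP) c - 2d  = {c - 2*d:.1f} cmH2O")
print(f"upper corner (~UIP) c + 2d  = {c + 2*d:.1f} cmH2O")
print(f"R^2 = {r2:.3f}")
```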

Relevance:

100.00%

Publisher:

Abstract:

This article examines the effect on price of different characteristics of holiday hotels in the sun-and-beach segment, from the hedonic function perspective. Monthly prices of the majority of hotels on the Spanish continental Mediterranean coast were gathered from May to October 1999 from tour operator catalogues. Hedonic functions are specified as random-effect models and parametrized as structural equation models with two latent variables: a random peak-season price and a random width of seasonal fluctuations. Characteristics of the hotel and of the region where it is located are used as predictors of both latent variables. Besides hotel category, the region, distance to the beach, availability of parking and room equipment have an effect on peak price and also on seasonality. 3-star hotels have the highest seasonality and hotels located in the southern regions the lowest, which could be explained by a warmer climate in autumn.
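
A hedged sketch of one way to encode the two latent quantities, a hotel-specific peak price (random intercept) and a hotel-specific width of seasonal fluctuations (random slope on a seasonality regressor), as a mixed model in statsmodels. Variable names, the seasonality coding and the input file are assumptions, not the authors' exact SEM specification.

```python
# Hedonic price equation with a hotel-level random intercept ("peak price")
# and a random slope on a seasonality regressor ("width of fluctuations").
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per hotel x month, with columns
# price, hotel_id, months_from_peak (0 in peak month, growing towards
# May/October), stars, dist_beach, parking, region.
df = pd.read_csv("hotel_prices.csv")

model = smf.mixedlm(
    "price ~ stars + dist_beach + parking + C(region)"
    " + months_from_peak + months_from_peak:stars",
    data=df,
    groups=df["hotel_id"],
    re_formula="~months_from_peak",   # random intercept + random slope
)
print(model.fit().summary())
```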

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Drugs for inhalation are the cornerstone of therapy in obstructive lung disease. We have observed that up to 75% of patients do not perform a correct inhalation technique. The inability of patients to correctly use their inhaler device may be a direct consequence of insufficient or poor inhaler technique instruction. The objective of this study is to test the efficacy of two educational interventions to improve inhalation technique in patients with Chronic Obstructive Pulmonary Disease (COPD). METHODS: This study uses both a multicenter patients' preference trial and a comprehensive cohort design, with 495 COPD-diagnosed patients selected by a non-probabilistic sampling method from seven Primary Care Centers. The participants will be divided into two groups and five arms. The two groups are: 1) the patients' preference group, with two arms, and 2) the randomized group, with three arms. In the preference group, the two arms correspond to the two educational interventions (Intervention A and Intervention B) designed for this study. In the randomized group, the three arms comprise intervention A, intervention B and a control arm. Intervention A is written information (a leaflet describing the correct inhalation techniques). Intervention B is written information about inhalation techniques plus training by an instructor. Every patient in each group will be visited six times during the year of the study at their health care center. DISCUSSION: Our hypothesis is that the application of these two educational interventions in patients with COPD who are treated with inhaled therapy will increase the number of patients who perform a correct inhalation technique by at least 25%. We will evaluate the effectiveness of these interventions on improving patient inhalation technique, on the premise that this will be adequate and feasible within the context of clinical practice.
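
For orientation only, a quick power calculation around the stated hypothesis of a 25-percentage-point improvement; the baseline proportion, alpha and power below are assumptions, not values taken from the protocol.

```python
# Rough power check for an increase of at least 25 percentage points in the
# proportion of patients with correct inhalation technique. Baseline
# proportion, alpha and power are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_intervention = 0.25, 0.50     # assumed 25% -> 50% correct technique
effect = proportion_effectsize(p_intervention, p_control)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                         power=0.80, alternative="larger")
print(f"about {n_per_arm:.0f} patients per arm")
```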

Relevance:

100.00%

Publisher:

Abstract:

Soluble peptide/MHC-class-I (pMHC) multimers have recently emerged as unique reagents for the study of specific interactions between the pMHC complex and the TCR. Here, we assessed the relative binding efficiency of a panel of multimers incorporating single-alanine-substituted variants of the tumor-antigen-derived peptide MAGE-A10(254-262) to specific CTL clones displaying different functional avidity. For each individual clone, the efficiency of binding of multimers incorporating MAGE-A10 peptide variants was, in most cases, in good although not linear correlation with the avidity of recognition of the corresponding variant. In addition, we observed two types of discrepancies between efficiency of recognition and multimer binding. First, for some peptide variants, efficient multimer binding was detected in the absence of measurable effector functions. Some of these peptide variants displayed antagonist activity. Second, when comparing different clones we found clear discrepancies between the dose of peptide required to obtain half-maximal lysis in CTL assays and the binding efficiency of the corresponding multimers. These discrepancies, however, were resolved when the differential stability of the TCR/pMHC complexes was determined. For individual clones, decreased recognition correlated with increased TCR/pMHC off-rate. TCR/pMHC complexes formed by antagonist ligands displayed off-rates faster than those of TCR/pMHC complexes formed with weak agonists. In addition, when comparing different clones, the efficiency of multimer staining correlated better with relative multimer off-rates than with half-maximal lysis values. Altogether, the data presented here reconcile and extend our previous results on the impact of the kinetics of interaction of TCR with pMHC complexes on multimer binding and underline the crucial role of TCR/pMHC off-rates for the functional outcome of such interactions.
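
As a generic illustration of what an off-rate estimate involves (not the authors' protocol), the sketch below fits a monoexponential decay to hypothetical multimer-staining dissociation data to extract k_off.

```python
# Estimate a dissociation off-rate (k_off) by fitting a single-exponential
# decay to multimer-staining intensity over time. Sample data and the
# monoexponential model are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, I0, k_off, plateau):
    return plateau + (I0 - plateau) * np.exp(-k_off * t)

t = np.array([0, 2, 5, 10, 20, 40, 60], dtype=float)       # minutes
I = np.array([100, 85, 68, 48, 27, 14, 10], dtype=float)   # % of initial staining

(I0, k_off, plateau), _ = curve_fit(decay, t, I, p0=[100, 0.05, 5])
print(f"k_off ~ {k_off:.3f} per min, half-life ~ {np.log(2)/k_off:.1f} min")
```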

Relevance:

100.00%

Publisher:

Abstract:

The use of observer-rated scales requires that raters be trained until they have become reliable in using the scales. However, few studies properly report how training in using a given rating scale is conducted or indeed how it should be conducted. This study examined progress in interrater reliability over 6 months of training with two observer-rated scales, the Cognitive Errors Rating Scale and the Coping Action Patterns Rating Scale. The evolution of the intraclass correlation coefficients was modeled using hierarchical linear modeling. Results showed an overall training effect as well as effects of the basic training phase and of the rater calibration phase, the latter being smaller than the former. The results are discussed in terms of implications for rater training in psychotherapy research.
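
A minimal sketch of computing intraclass correlation coefficients from long-format ratings with pingouin; the column names, input file and choice of ICC form are assumptions rather than the study's exact settings.

```python
# Intraclass correlation for a set of raters scoring the same cases.
import pandas as pd
import pingouin as pg

# Hypothetical long-format file with columns: case_id, rater, score.
ratings = pd.read_csv("cers_ratings.csv")

icc = pg.intraclass_corr(data=ratings, targets="case_id",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```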

Relevance:

100.00%

Publisher:

Abstract:

Quantitative assessment of soil physical quality is of great importance for eco-environmental pollution and soil quality studies. In this paper, based on the S-theory, data from 16 collection sites in the Haihe River Basin in northern China were used, and the effects of soil particle size distribution and bulk density on three important indices of the S-theory were investigated on a regional scale. The relationships between the unsaturated hydraulic conductivity Ki at the inflection point and the S values (S/hi) were also studied using two different types of fitting equations. The results showed that the polynomial equation was better than the linear equation for describing the relationships between -log Ki and -log S, and between -log Ki and -log (S/hi)²; clay content was the most important factor affecting the soil physical quality index (S). The variation of the S index with soil clay content could be fitted using a double-linear-line approach, with the decrease in the S index being much faster for clay contents below 20%. In contrast, the bulk density index was found to be less important than clay content. The average S index was 0.077, indicating that soil physical quality in the Haihe River Basin was good.
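
A small sketch of the model comparison described above, fitting -log Ki against -log S with a linear and a second-order polynomial equation and comparing R²; the sample values are invented, not the Haihe River Basin data.

```python
# Compare linear vs polynomial fits of -log(Ki) against -log(S).
import numpy as np

neg_log_S  = np.array([0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5])
neg_log_Ki = np.array([1.2, 1.6, 2.1, 2.8, 3.6, 4.6, 5.8])   # illustrative values

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)

for deg in (1, 2):                       # linear vs second-order polynomial
    coeffs = np.polyfit(neg_log_S, neg_log_Ki, deg)
    fit = np.polyval(coeffs, neg_log_S)
    print(f"degree {deg}: R^2 = {r_squared(neg_log_Ki, fit):.4f}")
```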

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: This study aimed to assess the validity of COOP charts in a general population sample, to examine whether illustrations contribute to instrument validity, and to establish general population norms. METHODS: A general population mail survey was conducted among residents of the Swiss canton of Vaud aged 20-79 years. Participants were invited to complete the COOP charts and the SF-36 Health Survey; they also provided data on health service use in the previous month. Two thirds of the respondents received standard COOP charts, the rest received charts without illustrations. RESULTS: Overall, 1250 persons responded (54%). The presence of illustrations did not affect score distributions, except that the illustrated 'physical fitness' chart drew greater non-response (10 vs. 3%, p < 0.001). Validity tests were similar for illustrated and picture-less charts. Factor analysis yielded two principal components, corresponding to physical and mental health. Six COOP charts showed strong and nearly linear relationships with the corresponding SF-36 scores (all p < 0.001), demonstrating concurrent validity. Similarly, most COOP charts were associated with the use of medical services in the past month. Only the chart on 'social support' partly deviated from the construct validity hypotheses. Population norms revealed a generally lower health status in women and an age-related decline in physical health. CONCLUSIONS: COOP charts can be used to assess the health status of a general population. Their validity is good, with the possible exception of the 'social support' chart. The illustrations do not affect the properties of this instrument.
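
As an illustration of the dimensionality check reported above, the sketch below extracts two principal components from chart scores; the input file and column layout are assumptions, and the study's exact factor-analysis settings are not reproduced (the paper's finding is that the two components correspond to physical and mental health).

```python
# Extract two principal components from chart scores and inspect the
# explained variance and loadings.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

charts = pd.read_csv("coop_scores.csv").dropna()   # hypothetical: one column per chart
X = StandardScaler().fit_transform(charts)

pca = PCA(n_components=2).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
print(pd.DataFrame(pca.components_.T, index=charts.columns,
                   columns=["PC1", "PC2"]))
```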

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a method to reconstruct 3D surfaces of silicon wafers from 2D images of printed circuits taken with a scanning electron microscope. Our reconstruction method combines the physical model of the optical acquisition system with prior knowledge about the shapes of the patterns in the circuit; the result is a shape-from-shading technique with a shape prior. The reconstruction of the surface is formulated as an optimization problem with an objective functional that combines a data-fidelity term on the microscopic image with two prior terms on the surface. The data term models the acquisition system through the irradiance equation characteristic of the microscope; the first prior is a smoothness penalty on the reconstructed surface, and the second prior constrains the shape of the surface to agree with the expected shape of the pattern in the circuit. In order to account for the variability of the manufacturing process, this second prior includes a deformation field that allows a nonlinear elastic deformation between the expected pattern and the reconstructed surface. As a result, the minimization problem has two unknowns, and the reconstruction method provides two outputs: 1) a reconstructed surface and 2) a deformation field. The reconstructed surface is derived from the shading observed in the image and the prior knowledge about the pattern in the circuit, while the deformation field produces a mapping between the expected shape and the reconstructed surface that provides a measure of deviation between the circuit design models and the real manufacturing process.
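
With assumed notation (S the surface, u the deformation field, I the SEM image, R(·) the irradiance model, S0 the expected pattern shape, Ψ an elastic energy density, λ weights), the objective functional described above can be summarised roughly as:

```latex
% Assumed symbols, not the authors' exact notation.
E(S, u) =
  \underbrace{\int_\Omega \bigl(I(x) - R(S)(x)\bigr)^2 \, dx}_{\text{data fidelity (irradiance equation)}}
+ \lambda_1 \underbrace{\int_\Omega \lVert \nabla S(x) \rVert^2 \, dx}_{\text{smoothness prior}}
+ \lambda_2 \underbrace{\int_\Omega \bigl(S(x) - S_0(x + u(x))\bigr)^2 \, dx}_{\text{shape prior, elastically deformed}}
+ \lambda_3 \underbrace{\int_\Omega \Psi\bigl(\nabla u(x)\bigr) \, dx}_{\text{elastic regularisation of } u}
```

Minimising E jointly over S and u yields the two outputs named in the abstract: the reconstructed surface and the deformation field.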

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: (1) To quantify the wear of two different denture tooth materials in vivo with two study designs, and (2) to relate tooth variables to vertical loss. METHODS: Two different denture tooth materials were used (experimental material = test; DCL = control). In study 1 (split-mouth, 6 test centers) 60 subjects received complete dentures; in study 2 (two-arm, 1 test center) 29 subjects did. In study 1 the mandibular dentures were supported by implants in 33% of the subjects, in study 2 in only 3% of the subjects. Impressions of the dentures were taken and poured with improved stone at baseline and after 6, 12, 18 and 24 months. Each operator evaluated the wear subjectively. Wear analysis was carried out with a laser scanning device. The maximal vertical loss of the attrition zones was calculated for each tooth cusp and tooth. A mixed linear model was used to statistically analyse the logarithmically transformed wear data. RESULTS: Due to drop-outs and unmatchable casts, only 47 subjects of study 1 and 14 of study 2 completed the 2-year recall. Overall, 75% of all teeth present could be analysed. There was no statistically significant difference in overall wear between the test and control material in either study 1 or study 2. The relative increase in wear over time was similar in both study designs. However, a strong subject effect and a strong center effect were observed. The fixed factors included in the model (time, tooth, center, etc.) accounted for 43% of the variability, whereas the random subject effect accounted for another 30% of the variability, leaving about 28% of the variability unexplained. More wear was consistently recorded in the maxillary teeth compared to the mandibular teeth, and in the first molars compared to the premolars and second molars. Likewise, the supporting cusps showed more wear than the non-supporting cusps. The amount of wear did not depend on whether or not the lower dentures were supported by implants. The subjective wear assessment was correct in about 67% of the cases, if it is postulated that a wear difference of 100 μm should be subjectively detectable. SIGNIFICANCE: The clinical wear of denture teeth is highly variable, with a strong patient effect. More wear can be expected in maxillary denture teeth compared to mandibular teeth, in first molars compared to premolars, and in supporting cusps compared to non-supporting cusps. Laboratory data on the wear of denture tooth materials may not be confirmed in well-structured clinical trials, probably due to the large inter-individual variability.
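
A hedged sketch of the statistical model described in the methods, a mixed linear model on log-transformed wear with fixed factors and a random subject effect, using statsmodels; column names and the input file are assumptions, not the study's data dictionary.

```python
# Mixed linear model on log-transformed vertical wear with fixed factors
# and a random subject effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

wear = pd.read_csv("denture_wear.csv")        # hypothetical long-format file
wear["log_wear"] = np.log(wear["vertical_loss_um"])

model = smf.mixedlm(
    "log_wear ~ C(material) + C(time) + C(tooth) + C(center) + C(jaw) + C(cusp_type)",
    data=wear,
    groups=wear["subject_id"],                # random subject effect
)
print(model.fit().summary())
```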

Relevance:

100.00%

Publisher:

Abstract:

The objective of this work was to evaluate the effect of pelletized or extruded diets, with different levels of carbohydrate and lipid, on the gastrointestinal transit time (GITT) and its modulation in pacu (Piaractus mesopotamicus). One hundred and eighty pacu juveniles were fed eight isonitrogenous diets containing two carbohydrate levels (40 and 50%) and two lipid levels (4 and 8%). Four diets were pelletized and four were extruded. The experimental carbohydrate and lipid levels caused no changes in the bolus transit time. However, the bolus permanence time was related to diet processing. Fish fed pelletized diets exhibited the longest gastrointestinal transit time. Regression analysis of bolus behavior for pelletized and extruded diets with 4% lipid produced different fits. The GITT regression for fish fed 8% lipid was fitted to a cubic equation and showed adjustments of food permanence, with enhanced utilization of the diets, whether extruded or pelletized. The GITT of fish fed extruded diets with 4% lipid was fitted to a linear equation. The GITT of pacu depends on the diet processing and is affected by dietary levels of lipid and carbohydrate.
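
As a generic illustration of the curve-fitting step, the sketch below fits the fraction of the bolus remaining against hours after feeding with a linear and a cubic equation; the sample points are invented, not the pacu data.

```python
# Fit bolus retention against time post-feeding with linear and cubic models.
import numpy as np
from numpy.polynomial import Polynomial

hours     = np.array([2, 6, 10, 14, 18, 22, 26], dtype=float)
remaining = np.array([95, 80, 70, 55, 30, 12, 5], dtype=float)   # % of bolus

linear = Polynomial.fit(hours, remaining, deg=1)
cubic  = Polynomial.fit(hours, remaining, deg=3)
for name, model in (("linear", linear), ("cubic", cubic)):
    resid = remaining - model(hours)
    print(name, "coefficients:", np.round(model.convert().coef, 3),
          " SSE:", round(float(np.sum(resid**2)), 1))
```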

Relevance:

100.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions in the form of structural parameters to accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions to the problem are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values, including the degenerate cases, for which the system is solvable are considered). Adopting the developed solution method in solving the dyadic equations in direct polynomial form for two- to three-precision-point problems, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This will result in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method, in principle, is capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
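
For readers unfamiliar with the dyadic equations referred to above, the sketch below writes the standard two-precision-point dyad equation in complex-number form and sweeps the free choices to generate candidate ground pivots. The notation (W, Z, α, β) is textbook convention and the example poses are invented; this does not reproduce the thesis's algebraic-geometry treatment of positive-dimensional solution sets.

```python
# Generic two-precision-point dyad synthesis in complex-number form:
#     W (e^{i*beta} - 1) + Z (e^{i*alpha} - 1) = delta
# with W the vector ground pivot -> moving pivot, Z the vector
# moving pivot -> coupler point. Sweeping the free choices (here beta and
# the real part of Z) traces a map of candidate ground pivots.
import numpy as np

P1 = 0.0 + 0.0j             # coupler point, position 1 (illustrative)
P2 = 2.0 + 1.0j             # coupler point, position 2 (illustrative)
alpha = np.deg2rad(30.0)    # prescribed coupler rotation from pose 1 to 2
delta = P2 - P1

ground_pivots = []
for beta_deg in range(10, 180, 10):                  # free choice: crank rotation
    beta = np.deg2rad(beta_deg)
    for zr in np.linspace(-2, 2, 9):                 # free choice: Z
        Z = complex(zr, 0.5)
        W = (delta - Z * (np.exp(1j * alpha) - 1)) / (np.exp(1j * beta) - 1)
        ground_pivots.append(P1 - Z - W)             # ground pivot G = P1 - W - Z

print(f"{len(ground_pivots)} candidate ground pivots generated")
```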

Relevance:

100.00%

Publisher:

Abstract:

The present study was done with two different servo-systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second servo-system, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables in order to find the best parameters of a servo-hydraulic system with a flexible load. DE guarantees fast convergence and accurate solutions regardless of the initial conditions of the parameters. The control of hydraulic servo-systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly non-linear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, to compensate for the lack of damping in a hydraulic system, acceleration feedback was used. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the error in position reference tracking. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a non-linear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track a flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated by using the Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
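
A minimal sketch of evolutionary parameter identification in the spirit described above: differential evolution fits the gain, natural frequency and damping of an assumed second-order model to a (here synthetic) measured step response. The model structure, bounds and data are assumptions, not the thesis's servo-hydraulic model.

```python
# Identify second-order model parameters (gain k, natural frequency wn,
# damping zeta) from a step response using differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import lti, step

t_meas = np.linspace(0, 2, 200)
# y_meas would normally come from the test rig; here a noisy synthetic signal
# generated from k = 0.04, wn = 10 rad/s, zeta = 0.3.
_, y_true = step(lti([4.0], [1.0, 6.0, 100.0]), T=t_meas)
y_meas = y_true + 0.01 * np.random.default_rng(0).normal(size=t_meas.size)

def cost(params):
    k, wn, zeta = params
    _, y = step(lti([k * wn**2], [1.0, 2.0 * zeta * wn, wn**2]), T=t_meas)
    return float(np.sum((y - y_meas)**2))    # sum of squared response errors

bounds = [(0.01, 1.0), (1.0, 50.0), (0.05, 2.0)]   # k, wn, zeta
result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
print("identified k, wn, zeta:", np.round(result.x, 3))
```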