943 results for RESIDUAL ANALYSIS


Relevance: 30.00%

Publisher:

Abstract:

Mounting accuracy of the satellite payload and ADCS (attitude determination and control subsystem) seats is one of the requirements for achieving the satellite mission with acceptable performance. Components of mounting inaccuracy are technological inaccuracies, residual plastic deformations after loading (during transportation and orbital insertion), elastic deformations, and thermal deformations during orbital operation. This paper focuses on the estimation of thermal deformations of the satellite structure. A thermal analysis is executed by applying the finite-difference method (IDEAS), and a temperature profile for the satellite components is evaluated. A thermal finite-element analysis is then performed, applying the finite-difference model results as boundary conditions, and the resultant thermal strain is calculated. Finally, applying this thermal strain, a finite-element structural analysis is performed to evaluate structural deformations at the payload and ADCS equipment seats.
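The thermal-deformation step rests on the linear relation between temperature change and free thermal strain (ε = αΔT). The sketch below is a minimal illustration of that relation only; the material and geometry values are hypothetical and not taken from the paper:

```python
# Free thermal strain of a linear elastic member: epsilon = alpha * dT.
# All numeric values below are hypothetical illustrations.

ALPHA_AL = 23.1e-6   # 1/K, thermal expansion coefficient of aluminium
L_PANEL = 0.50       # m, hypothetical panel length

def thermal_strain(alpha, delta_t):
    """Unconstrained thermal strain for a temperature change delta_t (K)."""
    return alpha * delta_t

def thermal_elongation(alpha, delta_t, length):
    """Resulting free elongation of a member of the given length (m)."""
    return thermal_strain(alpha, delta_t) * length

# Hypothetical 80 K swing between eclipse and sunlit phases
dT = 80.0
eps = thermal_strain(ALPHA_AL, dT)
dL = thermal_elongation(ALPHA_AL, dT, L_PANEL)
print(f"strain = {eps:.3e}, elongation = {dL * 1e6:.0f} um")  # ~924 um
```

In the paper the strain field comes from the finite-element temperature solution rather than a single ΔT, but the per-element relation is the same.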


Reliable detection of JAK2-V617F is critical for accurate diagnosis of myeloproliferative neoplasms (MPNs); in addition, sensitive mutation-specific assays can be applied to monitor disease response. However, there has been no consistent approach to JAK2-V617F detection, with assays varying markedly in performance, affecting clinical utility. Therefore, we established a network of 12 laboratories from seven countries to systematically evaluate nine different DNA-based quantitative PCR (qPCR) assays, including those in widespread clinical use. Seven quality control rounds involving over 21,500 qPCR reactions were undertaken using centrally distributed cell line dilutions and plasmid controls. The two best-performing assays were tested on normal blood samples (n=100) to evaluate assay specificity, followed by analysis of serial samples from 28 patients transplanted for JAK2-V617F-positive disease. The most sensitive assay, which performed consistently across a range of qPCR platforms, predicted outcome following transplant, with the mutant allele detected a median of 22 weeks (range 6-85 weeks) before relapse. Four of seven patients achieved molecular remission following donor lymphocyte infusion, indicative of a graft-versus-MPN effect. This study has established a robust, reliable assay for sensitive JAK2-V617F detection, suitable for assessing response in clinical trials, predicting outcome and guiding management of patients undergoing allogeneic transplant.


Aims: To build a population pharmacokinetic model that describes the apparent clearance of tacrolimus and the potential demographic, clinical and genetically controlled factors that could lead to inter-patient pharmacokinetic variability among children following liver transplantation.

Methods: The present study retrospectively examined tacrolimus whole blood pre-dose concentrations (n = 628) of 43 children during their first year post-liver transplantation. Population pharmacokinetic analysis was performed using the non-linear mixed effects modelling program (NONMEM) to determine the population mean parameter estimate of clearance and influential covariates.

Results: The final model identified time post-transplantation and CYP3A5*1 allele as influential covariates on tacrolimus apparent clearance according to the following equation:

TVCL = 12.9 × (Weight/13.2)^0.75 × EXP(−0.00158 × TPT) × EXP(0.428 × CYP3A5)

where TVCL is the typical value for apparent clearance, TPT is time post-transplantation in days, and CYP3A5 is 1 where the *1 allele is present and 0 otherwise. The population estimate and inter-individual variability (%CV) of tacrolimus apparent clearance were found to be 0.977 l/h/kg (95% CI 0.958, 0.996) and 40.0%, respectively, while the residual variability between the observed and predicted concentrations was 35.4%.
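The covariate equation can be evaluated directly. A short sketch of the published model follows; the example patients are hypothetical:

```python
# Published covariate model for tacrolimus apparent clearance TVCL (l/h):
# weight in kg, TPT in days post-transplantation, CYP3A5 = 1 if a *1
# allele is present, else 0. Example inputs are hypothetical patients.
import math

def tvcl(weight_kg, tpt_days, cyp3a5):
    return (12.9
            * (weight_kg / 13.2) ** 0.75
            * math.exp(-0.00158 * tpt_days)
            * math.exp(0.428 * cyp3a5))

# A 13.2 kg child at day 0 without the *1 allele has the base clearance:
print(round(tvcl(13.2, 0, 0), 1))    # 12.9 l/h
# The same child carrying a *1 allele at day 100 post-transplant:
print(round(tvcl(13.2, 100, 1), 1))  # ~16.9 l/h
```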

Conclusion: Tacrolimus apparent clearance was influenced by time post-transplantation and CYP3A5 genotype. The results of this study, once confirmed by a large-scale prospective study, can be used in conjunction with therapeutic drug monitoring to recommend tacrolimus dose adjustments that take into account not only body weight but also genetic and time-related changes in tacrolimus clearance. © 2013 The British Pharmacological Society.


Analysis of gamma-H2AX foci in blood lymphocytes is a promising approach for rapid dose estimation to support patient triage after a radiation accident, but has one major drawback: the rapid decline of foci levels post-exposure causes major uncertainties in situations where the exact timing between exposure and blood sampling is unknown. To address this issue, radiation-induced apoptosis (RIA) in lymphocytes was investigated using fluorogenic inhibitors of caspases (FLICA) as an independent biomarker for radiation exposure, which may complement the gamma-H2AX assay. Ex vivo X-irradiated peripheral blood lymphocytes from 17 volunteers showed dose- and time-dependent increases in radiation-induced apoptosis over the first 3 days after exposure, albeit with considerable interindividual variation. Comparison with gamma-H2AX and 53BP1 foci counts suggested an inverse correlation between numbers of residual foci and radiation-induced apoptosis in lymphocytes at 24 h postirradiation (P = 0.007). In T-helper (CD4), T-cytotoxic (CD8) and B-cells (CD19), some significant differences in radiation-induced DSBs or apoptosis were observed; however, no correlation between foci and apoptosis in lymphocyte subsets was observed at 24 h postirradiation. While gamma-H2AX and 53BP1 foci were rapidly induced and then repaired after exposure, radiation-induced apoptosis did not become apparent until 24 h after exposure. Data from six volunteers with different ex vivo doses and post-exposure times were used to test the capability of the combined assay. The results show that, compared to gamma-H2AX alone, simultaneous analysis of gamma-H2AX and radiation-induced apoptosis may provide a rapid and more accurate triage tool in situations where the delay between exposure and blood sampling is unknown. This combined approach may improve the accuracy of dose estimations in cases where blood sampling is performed days after the radiation exposure.


Radio-frequency (RF) impairments, which are inherently present in wireless communication systems, can severely limit the performance of multiple-input multiple-output (MIMO) systems. Although compensation schemes can mitigate some of these impairments, a certain amount of residual impairment always persists. In this paper, we consider a training-based point-to-point MIMO system with residual transmit RF impairments (RTRI) using spatial multiplexing transmission. Specifically, we derive a new linear channel estimator for the proposed model, and show that RTRI create an estimation error floor in the high signal-to-noise ratio (SNR) regime. Moreover, we derive closed-form expressions for the signal-to-interference-plus-noise ratio (SINR) distributions, along with analytical expressions for the ergodic achievable rates of zero-forcing, maximum ratio combining, and minimum mean-squared error receivers, respectively. In addition, we optimize the ergodic achievable rates with respect to the training sequence length and demonstrate that finite-dimensional systems with RTRI generally require more training at high SNRs than those with ideal hardware. Finally, we extend our analysis to large-scale MIMO configurations, and derive deterministic equivalents of the ergodic achievable rates. It is shown that, by deploying large receive antenna arrays, the extra training requirements due to RTRI can be eliminated. In fact, with a sufficiently large number of receive antennas, systems with RTRI may even need less training than systems with ideal hardware.


Wilms' tumor gene 1 (WT1) is overexpressed in the majority (70-90%) of acute leukemias and has been identified as an independent adverse prognostic factor, a convenient minimal residual disease (MRD) marker and potential therapeutic target in acute leukemia. We examined WT1 expression patterns in childhood acute lymphoblastic leukemia (ALL), where its clinical implication remains unclear. Using a real-time quantitative PCR designed according to Europe Against Cancer Program recommendations, we evaluated WT1 expression in 125 consecutively enrolled patients with childhood ALL (106 BCP-ALL, 19 T-ALL) and compared it with physiologic WT1 expression in normal and regenerating bone marrow (BM). In childhood B-cell precursor (BCP)-ALL, we detected a wide range of WT1 levels (5 logs) with a median WT1 expression close to that of normal BM. WT1 expression in childhood T-ALL was significantly higher than in BCP-ALL (P<0.001). Patients with MLL-AF4 translocation showed high WT1 overexpression (P<0.01) compared to patients with other or no chromosomal aberrations. Older children (≥10 years) expressed higher WT1 levels than children under 10 years of age (P<0.001), while there was no difference in WT1 expression between patients with a peripheral blood leukocyte count (WBC) ≥50 × 10⁹/l and those with lower counts. Analysis of relapsed cases (14/125) indicated that an abnormal increase or decrease in WT1 expression was associated with a significantly increased risk of relapse (P=0.0006), and this prognostic impact of WT1 was independent of other main risk factors (P=0.0012). In summary, our study suggests that WT1 expression in childhood ALL is very variable and much lower than in AML or adult ALL. WT1 thus will not be a useful marker for MRD detection in childhood ALL; however, it does represent a potential independent risk factor in childhood ALL.
Interestingly, a proportion of childhood ALL patients express WT1 at levels below the normal physiological BM WT1 expression, and this reduced WT1 expression appears to be associated with a higher risk of relapse.


Chimaerism was assessed in five recipients following sex-mismatched allogeneic bone marrow transplantation. Techniques included karyotyping of bone marrow cells, dot blot DNA analysis of blood and bone marrow suspensions, and in vitro amplification of DNA by the polymerase chain reaction (PCR) using blood and bone marrow suspensions and stored bone marrow slides. Results of karyotypic analysis suggested complete chimaerism in four patients, while in one patient mixed chimaerism was detected. Mixed chimaerism was also detected, however, in a second patient using PCR and confirmed by dot blot analysis on all tissues examined. PCR is a sensitive tool for investigation of chimaerism following bone marrow transplantation. Since this technique does not require radioactivity, it is an attractive method for use in a clinical laboratory. This technique represents a further development in the use of DNA methodologies in the assessment of haematological disease.


The main objective of this work was to monitor a set of physical-chemical properties of heavy oil procedural streams through nuclear magnetic resonance spectroscopy, in order to propose an analysis procedure and online data processing for process control. Different statistical methods that relate the results obtained by nuclear magnetic resonance spectroscopy to the results obtained by the conventional standard methods during the characterization of the different streams have been implemented in order to develop models for predicting these same properties. Real-time knowledge of these physical-chemical properties of petroleum fractions is very important for enhancing refinery operations, ensuring technically, economically and environmentally proper refinery operation. The first part of this work involved the determination of many physical-chemical properties, at the Matosinhos refinery, by following standard methods important for evaluating and characterizing light vacuum gas oil, heavy vacuum gas oil and fuel oil fractions. Kinematic viscosity, density, sulfur content, flash point, carbon residue, P-value and atmospheric and vacuum distillations were the properties analysed. Besides the analysis using the standard methods, the same samples were analysed by nuclear magnetic resonance spectroscopy. The second part of this work was related to the application of multivariate statistical methods, which correlate the physical-chemical properties with the quantitative information acquired by nuclear magnetic resonance spectroscopy. Several methods were applied, including principal component analysis, principal component regression, partial least squares and artificial neural networks. Principal component analysis was used to reduce the number of predictive variables and to transform them into new variables, the principal components. These principal components were used as inputs of the principal component regression and artificial neural networks models. For the partial least squares model, the original data were used as input. Taking into account the performance of the developed models, assessed through selected statistical performance indexes, it was possible to conclude that principal component regression led to the worst performance. Better results were achieved with the partial least squares and artificial neural networks models; however, it was the artificial neural networks model that gave better predictions for almost all of the properties analysed. With reference to the results obtained, it was possible to conclude that nuclear magnetic resonance spectroscopy combined with multivariate statistical methods can be used to predict physical-chemical properties of petroleum fractions. It has been shown that this technique can be considered a potential alternative to the conventional standard methods, having produced very promising results.
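The principal-component-regression step described above can be sketched as follows. The data are synthetic stand-ins (the real inputs were NMR spectra and laboratory property values), and the dimensions are arbitrary:

```python
# Principal component regression (PCR) sketch: PCA via SVD to compress the
# predictor matrix, then least squares on the component scores.
# The "spectra" and "property" values are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))       # 40 samples x 200 spectral variables
y = X[:, :5].sum(axis=1)             # synthetic target property (e.g. density)

# PCA: SVD of the mean-centred predictor matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
scores = Xc @ Vt[:k].T               # principal-component scores (40 x 10)

# Regression of the property on the scores (with intercept)
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"training R^2 = {r2:.2f}")
```

Partial least squares differs in that the components are chosen to maximize covariance with the property rather than variance of the predictors alone.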


Repeated antimalarial treatment for febrile episodes and self-treatment are common in malaria-endemic areas. The intake of antimalarials prior to participating in an in vivo study may alter treatment outcome and affect the interpretation of both efficacy and safety outcomes. We report the findings from baseline plasma sampling of malaria patients prior to inclusion in an in vivo study in Tanzania and discuss the implications of residual concentrations of antimalarials in this setting. In an in vivo study conducted in a rural area of Tanzania in 2008, baseline plasma samples from patients reporting no antimalarial intake within the last 28 days were screened for the presence of 14 antimalarials (parent drugs or metabolites) using liquid chromatography-tandem mass spectrometry. Among the 148 patients enrolled, 110 (74.3%) had at least one antimalarial in their plasma: 80 (54.1%) had lumefantrine above the lower limit of calibration (LLC = 4 ng/mL), 7 (4.7%) desbutyl-lumefantrine (4 ng/mL), 77 (52.0%) sulfadoxine (0.5 ng/mL), 15 (10.1%) pyrimethamine (0.5 ng/mL), 16 (10.8%) quinine (2.5 ng/mL), and none had chloroquine (2.5 ng/mL). The proportion of patients with detectable antimalarial drug levels prior to enrollment into the study is worrying, since artemether-lumefantrine was supposed to be available only at government health facilities. Although sulfadoxine-pyrimethamine is only recommended for intermittent preventive treatment in pregnancy (IPTp), it was still widely used in public and private health facilities and sold in drug shops. Self-reporting of previous drug intake is unreliable, and thus screening for antimalarial drug levels should be considered in future in vivo studies to allow accurate assessment of treatment outcome. Furthermore, persisting sub-therapeutic drug levels of antimalarials in a population could promote the spread of drug resistance. Knowledge of drug pressure in a given population is important for monitoring the implementation of standard treatment policy.


Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways.
Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
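As an illustration of non-parametric line fitting on log-log allometric data, the sketch below uses the classical Theil-Sen estimator (median of pairwise slopes). This is a generic robust method, not the specific technique developed in the study, and the species values are invented:

```python
# Theil-Sen fit on log-log allometric data: the slope is the median of all
# pairwise slopes, so a few outliers barely move it. Data are hypothetical.
import math
from itertools import combinations
from statistics import median

body_mass = [0.05, 0.4, 3.0, 25.0, 200.0, 1500.0]     # kg (hypothetical)
gestation = [20.0, 35.0, 70.0, 150.0, 280.0, 500.0]   # days (hypothetical)

x = [math.log10(v) for v in body_mass]
y = [math.log10(v) for v in gestation]

# Median of all pairwise slopes, then median-based intercept
slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i, j in combinations(range(len(x)), 2)]
slope = median(slopes)
intercept = median(yi - slope * xi for xi, yi in zip(x, y))
print(f"gestation ~ mass^{slope:.2f}")
```

Because the estimate depends on medians rather than squared residuals, no normality assumption is needed, which is the property the abstract emphasizes.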


Bovine adenovirus type 3 (BAV3) is a medium size DNA virus that causes respiratory and gastrointestinal disorders in cattle. The viral genome consists of a 35,000 base pair, linear, double-stranded DNA molecule with inverted terminal repeats and a 55 kilodalton protein covalently linked to each of the 5' ends. In this study, the viral genome was cloned in the form of subgenomic restriction fragments. Five EcoRI internal fragments spanning 3.4 to 89.0% and two XbaI internal fragments spanning 35.7 to 82.9% of the viral genome were cloned into the EcoRI and XbaI sites of the bacterial vector pUC19. To generate overlap between cloned fragments, ten HindIII internal fragments spanning 3.9 to 84.9% and 85.5 to 96% and two BAV3 BamHI internal fragments spanning 59.8 to 84.9% of the viral genome were cloned into the HindIII and BamHI sites of pUC19. The HindIII cloning strategy also resulted in six recombinant plasmids carrying two or more HindIII fragments. These fragments provided valuable information on the linear orientation of the cloned fragments within the viral genome. Cloning of the terminal fragments required the removal of the residual peptides that remain attached to the 5' ends of the genome. This was accomplished by alkaline hydrolysis of the DNA-peptide bond. BamHI restriction fragments of the peptide-free DNA were cloned into pUC19 and resulted in two plasmids carrying the BAV3 BamHI terminal fragments spanning 0 to 53.9% and 84.9 to 100% of the viral genome.


A simple, low-cost concentric capillary nebulizer (CCN) was developed and evaluated for ICP spectrometry. The CCN could be operated at sample uptake rates of 0.050-1.00 ml min⁻¹ and under oscillating and non-oscillating conditions. Aerosol characteristics of the CCN were studied using a laser Fraunhofer diffraction analyzer. Solvent transport efficiencies and transport rates, detection limits, and short- and long-term stabilities were evaluated for the CCN with a modified cyclonic spray chamber at different sample uptake rates. The Mg II (280.2 nm)/Mg I (285.2 nm) ratio was used for matrix effect studies. Results were compared to those with conventional nebulizers: a cross-flow nebulizer with a Scott-type spray chamber, a GemCone nebulizer with a cyclonic spray chamber, and a Meinhard TR-30-K3 concentric nebulizer with a cyclonic spray chamber. Transport efficiencies of up to 57% were obtained for the CCN. For the elements tested, short- and long-term precisions and detection limits obtained with the CCN at 0.050-0.500 ml min⁻¹ are similar to, or better than, those obtained on the same instrument using the conventional nebulizers (at 1.0 ml min⁻¹). The depressive and enhancement effects of the easily ionizable element Na, sulfuric acid, and dodecylamine surfactant on analyte signals with the CCN are similar to, or better than, those obtained with the conventional nebulizers. However, capillary clogging was observed when sample solutions with high dissolved solids were nebulized for more than 40 min. The effects of data acquisition and data processing on detection limits were studied using inductively coupled plasma-atomic emission spectrometry. The study examined the effects of different detection limit approaches, data integration modes, regression modes, the standard concentration range and the number of standards, sample uptake rate, and integration time. All the experiments followed the same protocols. Three detection limit approaches were examined: the IUPAC method, the residual standard deviation (RSD) method, and the signal-to-background ratio and relative standard deviation of the background (SBR-RSDB) method. The study demonstrated that the different approaches, the integration modes, the regression methods, and the sample uptake rates can have an effect on detection limits. The study also showed that the different approaches give different detection limits and that some methods (for example, RSD) are susceptible to the quality of the calibration curves. Multicomponent spectral fitting (MSF) gave the best results among the three integration modes: peak height, peak area, and MSF. The weighted least squares method showed the ability to obtain better-quality calibration curves. Although an effect of the number of standards on detection limits was not observed, multiple standards are recommended because they provide more reliable calibration curves. An increase in sample uptake rate and integration time could improve detection limits. However, an improvement in detection limits with increased integration time was not observed, because the auto integration mode was used.
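One common form of detection-limit calculation, three standard deviations of the background signal divided by the calibration slope, can be illustrated as follows. The intensities and concentrations are hypothetical, and this is a generic textbook formulation rather than any of the three specific approaches compared in the study:

```python
# Generic 3-sigma detection limit: DL = 3 * s(blank) / slope, where slope
# comes from a least-squares calibration line. All values are hypothetical.
from statistics import mean, stdev

blank = [101.0, 99.5, 100.2, 100.8, 99.0,
         100.5, 99.8, 100.2, 101.1, 99.9]        # background intensities
std_conc = [0.0, 1.0, 2.0, 5.0, 10.0]            # standards, mg/l
std_signal = [100.0, 350.0, 600.0, 1350.0, 2600.0]

# Least-squares slope of the calibration curve (signal vs concentration)
mx, my = mean(std_conc), mean(std_signal)
slope = (sum((x - mx) * (y - my) for x, y in zip(std_conc, std_signal))
         / sum((x - mx) ** 2 for x in std_conc))

dl = 3 * stdev(blank) / slope                    # detection limit, mg/l
print(f"slope = {slope:.1f} counts per mg/l, DL = {dl:.4f} mg/l")
```

A weighted least-squares slope, as the study recommends, would change `slope` (and hence the detection limit) when the signal variance grows with concentration.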


In this paper: a) the consumer's problem is studied over two periods, the second one involving S states, the consumer being endowed with S+1 incomes and having access to N financial assets; b) the consumer is then representable by a continuously differentiable system of demands: commodity demands, asset demands and desirabilities of incomes (the S+1 Lagrange multipliers of the S+1 constraints); c) the multipliers can be transformed into subjective Arrow prices; d) the effects of the various incomes on these Arrow prices decompose into a compensation effect (an Antonelli matrix) and a wealth effect; e) the Antonelli matrix has rank S-N, the dimension of incompleteness, if the consumer can financially adjust himself when facing income shocks; f) the matrix has rank S, if not; g) in the first case, the matrix represents a residual aversion; in the second case, a fundamental aversion; the difference between them is an aversion to illiquidity; this last relation corresponds to the Drèze-Modigliani decomposition (1972); h) the fundamental aversion also decomposes into an aversion to impatience and a risk aversion; i) the above decompositions span a third decomposition; if there exists a sure asset (to be defined, the usual definition being too specific), the fundamental aversion admits a three-component decomposition: an aversion to impatience, a residual aversion and an aversion to the illiquidity of risky assets; j) the formulas of the corresponding financial premiums are also presented.


Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. The relevance of using the area under the curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation is supported by a growing number of studies. However, for reasons intrinsic to the way the AUC is calculated, its use in the clinical setting is impractical. Limited sampling strategies, based on regression approaches (R-LSS) or Bayesian approaches (B-LSS), are practical alternatives for satisfactory estimation of the AUC. However, for these methodologies to be applied effectively, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations collected over a short sampling period. In addition, particular attention should be paid to ensuring their adequate development and validation. It is also important to mention that irregularity in blood sample collection times can have a non-negligible impact on the predictive performance of R-LSS; to date, however, this impact had not been studied. This doctoral thesis addresses these issues in order to allow precise and practical estimation of the AUC. The studies were conducted in the context of CsA use in pediatric patients who underwent hematopoietic stem cell transplantation. First, multiple regression approaches as well as population pharmacokinetic (Pop-PK) analysis were used constructively to develop and adequately validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation.

The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach that considers diversified and realistic scenarios representing potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, as they involve 4 or fewer sampling points obtained within 4 hours post-dose. Once the Pop-PK analysis was performed, a two-compartment structural model with a lag time was retained. However, the final model (notably the one with covariates) did not improve the performance of the B-LSS compared with the structural models (without covariates). Furthermore, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations excluding residual errors, which we termed the "underlying AUC", than for the observed AUC calculated directly from the measured concentrations. Finally, our results showed that irregularity in blood sample collection times has an important impact on the predictive performance of R-LSS; this impact is a function of the number of samples required, but even more so of the duration of the sampling process involved. We also showed that sampling errors made at times when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS may perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely. In fact, adequate consideration of the impact of these errors can lead to more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis has provided notable methodological improvements and proposed new avenues for ensuring their reliable and informed use, while promoting their suitability for clinical practice.
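The quantity that the limited sampling strategies estimate, the AUC over the sampling interval, can be illustrated with the linear trapezoidal rule. The sampling schedule mirrors the "4 points within 4 hours post-dose" design described above, but the concentration values are hypothetical:

```python
# Linear trapezoidal AUC from a 4-point, 4-hour post-dose schedule.
# Concentrations are hypothetical, not patient data from the thesis.

times = [0.0, 1.0, 2.0, 4.0]              # h post-dose
conc = [150.0, 900.0, 700.0, 400.0]       # ng/ml cyclosporine (hypothetical)

def auc_trapezoid(t, c):
    """Linear trapezoidal AUC over the sampled interval."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(t, t[1:], c, c[1:]))

auc0_4 = auc_trapezoid(times, conc)
print(f"AUC(0-4h) = {auc0_4:.0f} ng*h/ml")  # 2425 ng*h/ml
```

An R-LSS replaces this direct calculation with a regression of the full-interval AUC on the few measured concentrations, which is why errors in the nominal sampling times, especially near the concentration peak, degrade its predictions.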


Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, and poor convergence. An approach that can probably overcome many of these problems would be to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element will not only have nodal equilibrium of forces, but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations on the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered. Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster than with other existing finite elements. The behaviour of these field-consistent elements should generate a great deal of interest amongst users of finite elements.
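The core idea, displacement functions that satisfy the stress-field equilibrium equations identically, can be checked symbolically. The sketch below verifies a hypothetical quadratic displacement field (not one of the thesis elements) against the 2D plane-stress equilibrium equations with zero body force:

```python
# Symbolic check that an assumed displacement field satisfies the 2D
# plane-stress equilibrium equations with zero body force:
#   d(sigma_x)/dx + d(tau_xy)/dy = 0,  d(tau_xy)/dx + d(sigma_y)/dy = 0.
# The quadratic field (u, v) below is a hypothetical example.
import sympy as sp

x, y = sp.symbols("x y")
E, nu = sp.symbols("E nu", positive=True)

# Candidate displacement field (pure-bending-like, hypothetical)
u = x * y
v = -(x**2 + nu * y**2) / 2

# Strains and plane-stress constitutive relations
ex = sp.diff(u, x)
ey = sp.diff(v, y)
gxy = sp.diff(u, y) + sp.diff(v, x)
c = E / (1 - nu**2)
sx = c * (ex + nu * ey)
sy = c * (ey + nu * ex)
txy = c * (1 - nu) / 2 * gxy

# Equilibrium residuals must vanish identically for a field-equilibrium element
r1 = sp.simplify(sp.diff(sx, x) + sp.diff(txy, y))
r2 = sp.simplify(sp.diff(txy, x) + sp.diff(sy, y))
print(r1, r2)   # 0 0
```

For this field the only nonzero stress is sigma_x = E·y, which is constant in x, so both residuals vanish; a field failing this check could not serve as an interpolation function for the elements proposed here.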