963 results for Link variable method
Abstract:
Most studies analysing the impact of infrastructure on regional growth find a positive relationship between the two variables. However, the public capital elasticity estimated in a Cobb-Douglas function, the most common specification in these works, is sometimes too large to be credible, so the results have been partially dismissed. In the present paper we provide some new evidence on the real link between public capital and productivity for the Spanish regions in the period 1964-1991. Firstly, we find that the association between the two variables is smaller when controlling for regional effects, with industry being the sector that reaps the most benefits from an increase in the infrastructure endowment. Secondly, the rigidity of the Cobb-Douglas function is overcome by using the variable expansion method. The expanded functional form reveals both the absence of a direct effect of infrastructure and the fact that the link between infrastructure and growth depends on the level of the existing stock (a threshold level) and on how infrastructure is articulated in its location relative to other factors. Finally, we analyse the importance of the spatial dimension of the infrastructure impact, due to spillover effects. Here the paper provides evidence of spatial autocorrelation processes that may invalidate previous results.
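To fix ideas, a minimal sketch of the two specifications discussed above, in standard aggregate production-function notation (Y output, K private capital, L labour, G public capital). The concrete expansion variables used in the paper are not given in this abstract, so the expansion on ln G below is only one illustrative choice, in the spirit of Casetti's expansion method:

```latex
% Cobb-Douglas with public capital: \gamma is the public-capital elasticity
% that the literature often estimates at implausibly large values.
\ln Y_{it} = \ln A + \alpha \ln K_{it} + \beta \ln L_{it} + \gamma \ln G_{it}

% Variable expansion: the elasticity itself becomes a function, e.g. of the
% existing stock, so the effect of G depends on a threshold level.
\gamma_{it} = \gamma_0 + \gamma_1 \ln G_{it}
\quad\Rightarrow\quad
\frac{\partial \ln Y_{it}}{\partial \ln G_{it}} = \gamma_0 + 2\gamma_1 \ln G_{it}
```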
Abstract:
Quantitatively assessing the importance, or criticality, of each link in a network is of practical value to operators, as it can help them increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations because it does not take into account some aspects relevant to networking, such as the heterogeneity in link capacity or the differences between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, and yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
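As a hedged illustration of the two shortcomings named in this abstract (this is not the authors' algorithm, which the abstract does not specify), a betweenness-like score can weight each node pair by its traffic demand and normalize each link by its capacity; a minimal sketch using networkx:

```python
import networkx as nx

def traffic_weighted_link_centrality(G, traffic):
    """Betweenness-like link score in which each node pair contributes its
    traffic demand (instead of 1) and each link is normalized by its
    capacity: the two aspects the basic definition ignores. Sketch only."""
    score = {tuple(sorted(e)): 0.0 for e in G.edges()}
    for (s, t), demand in traffic.items():
        path = nx.shortest_path(G, s, t)      # one shortest path, for simplicity
        for u, v in zip(path, path[1:]):
            score[tuple(sorted((u, v)))] += demand
    return {e: val / G.edges[e]["capacity"] for e, val in score.items()}

# Toy usage: a triangle with heterogeneous capacities and asymmetric demands.
G = nx.Graph()
G.add_edge("a", "b", capacity=10.0)
G.add_edge("b", "c", capacity=2.5)
G.add_edge("a", "c", capacity=2.5)
print(traffic_weighted_link_centrality(G, {("a", "b"): 8.0, ("b", "c"): 1.0}))
```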
Abstract:
The link between the Pacific/North American pattern (PNA) and the North Atlantic Oscillation (NAO) is investigated in reanalysis data (NCEP, ERA40) and in multi-century CGCM runs for present-day climate using three versions of the ECHAM model. PNA and NAO patterns and indices are determined via rotated principal component analysis on monthly mean 500 hPa geopotential height fields using the varimax criterion. On average, the multi-century CGCM simulations show a significant anti-correlation between PNA and NAO. Further, multi-decadal periods with significantly enhanced (high anti-correlation, active phase) or weakened (low correlation, inactive phase) coupling are found in all CGCMs. In the simulated active phases, the storm track activity near Newfoundland has a stronger link with the PNA variability than during the inactive phases. On average, the reanalysis datasets show no significant anti-correlation between the PNA and NAO indices, but during the sub-period 1973–1994 a significant anti-correlation is detected, suggesting that the present climate could correspond to an inactive period as detected in the CGCMs. An analysis of possible physical mechanisms suggests that the link between the patterns is established by the baroclinic waves forming the North Atlantic storm track. The geopotential height anomalies associated with negative PNA phases induce an increased advection of warm and moist air from the Gulf of Mexico and of cold air from Canada. Both types of advection contribute to increasing the baroclinicity over eastern North America and the low-level latent heat content of the warm air masses. Thus, growth conditions for eddies at the entrance of the North Atlantic storm track are enhanced. Considering the average temporal development during winter in the CGCMs, the results show an enhanced Newfoundland storm track maximum in early winter for negative PNA, followed by a downstream enhancement of the Atlantic storm track in the subsequent months. In active (inactive) phases, this seasonal development is enhanced (suppressed). As the storm track over the central and eastern Atlantic is closely related to the NAO variability, this development can be explained by the shift of the NAO index to more positive values.
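For readers unfamiliar with the pattern-extraction step, the varimax rotation applied to the PCA loadings is a standard algorithm; a minimal, generic sketch (not code from the study):

```python
import numpy as np

def varimax(loadings, tol=1e-6, max_iter=500):
    """Standard varimax rotation of a (gridpoints x components) loadings matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R

# Workflow sketch: take the leading EOFs (PCA loadings) of monthly 500 hPa
# height anomalies, rotate them with varimax, and define the PNA and NAO
# indices as the time series (scores) of the rotated patterns.
```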
Abstract:
Capacity dimensioning is one of the key problems in wireless network planning. Analytical and simulation methods are usually used to pursue accurate capacity dimensioning of wireless networks. In this paper, an analytical capacity dimensioning method for WCDMA with high-speed wireless links is proposed, based on an analysis of the relations between system performance and high-speed wireless transmission technologies such as H-ARQ, AMC and fast scheduling. It evaluates system capacity through closed-form expressions at both link level and system level. Numerical results show that the proposed method can calculate link-level and system-level capacity for a WCDMA system with HSDPA and HSUPA.
Abstract:
Common Variable Immunodeficiency (CVID) is a primary immunodeficiency disease characterized by defective immunoglobulin production and often associated with autoimmunity. We used flow cytometry to analyze CD4(+)CD25(high)FOXP3(+) T regulatory (Treg) cells and asked whether perturbations in their frequency in peripheral blood could underlie the high incidence of autoimmune disorders in CVID patients. In this study, we report for the first time that CVID patients with autoimmune disease have a significantly reduced frequency of CD4(+)CD25(high)FOXP3(+) cells in their peripheral blood, accompanied by a decreased intensity of FOXP3 expression. Notably, although CVID patients in whom autoimmunity was not diagnosed also had a reduced frequency of CD4(+)CD25(high)FOXP3(+) cells, their FOXP3 expression levels did not differ from those in healthy controls. In conclusion, these data suggest compromised homeostasis of CD4(+)CD25(high)FOXP3(+) cells in a subset of CVID patients with autoimmunity, and may implicate Treg cells in the pathological mechanisms of CVID.
Abstract:
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the ε_k-global minimization of the Augmented Lagrangian with simple constraints, where ε_k → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
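For concreteness, a hedged sketch of the subproblem objective, assuming the standard PHR (Powell-Hestenes-Rockafellar) form of the Augmented Lagrangian for equalities h_i(x) = 0 and inequalities g_j(x) ≤ 0; the abstract does not spell out the exact form used:

```latex
% PHR Augmented Lagrangian; at outer iteration k it is minimized to
% \varepsilon_k-global optimality subject only to the simple (e.g. box) constraints.
L_{\rho}(x,\lambda,\mu) = f(x)
  + \frac{\rho}{2}\sum_{i}\Big(h_i(x)+\frac{\lambda_i}{\rho}\Big)^{2}
  + \frac{\rho}{2}\sum_{j}\max\Big(0,\, g_j(x)+\frac{\mu_j}{\rho}\Big)^{2}
```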
Abstract:
Let us have an indirectly measurable variable that is a function of directly measurable variables. In this survey we present our method for the analytical representation of its maximum absolute and relative inaccuracy as functions, respectively, of the maximum absolute and relative inaccuracies of the directly measurable variables. Our new approach consists of fixing, at their statistical mean values, the absolute values of the coefficients of influence of the absolute and relative inaccuracies of the directly measurable variables, in order to determine the analytical form of the maximum absolute and relative inaccuracies of the indirectly measurable variable. Moreover, we give a method for determining the numerical values of these maximum absolute and relative inaccuracies. We define a sample plane of the ideal, perfectly accurate experiment and use it to give a universal numerical characteristic – a dimensionless scale – for determining the quality (accuracy) of the experiment.
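As a point of reference, the standard worst-case propagation formulas for an indirectly measurable y = f(x_1, ..., x_n); the partial-derivative factors below are what we read the abstract's "coefficients of influence" to mean (our reading, not stated in the abstract):

```latex
% Maximum absolute inaccuracy:
\Delta y_{\max} = \sum_{i=1}^{n}\left|\frac{\partial f}{\partial x_i}\right|\Delta x_i
% Maximum relative inaccuracy, with logarithmic coefficients of influence:
\frac{\Delta y_{\max}}{|y|}
  = \sum_{i=1}^{n}\left|\frac{\partial \ln f}{\partial \ln x_i}\right|\frac{\Delta x_i}{|x_i|}
```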
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect the delays encountered in the associated iteration. The iterative link time adjustment is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes, while the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different: the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
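A minimal sketch of the conversion and delay computation described above, using the common BPR parameter defaults alpha = 0.15 and beta = 4 (the abstract does not state which values FSUTMS uses; a variable CONFAC would replace the constant argument with a calibrated function of congestion):

```python
def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity, confac,
                    alpha=0.15, beta=4.0):
    """BPR volume-delay function driven by daily volumes: the hourly capacity
    is converted to a daily equivalent by dividing by CONFAC, the
    peak-hour-to-daily-volume ratio."""
    daily_capacity = hourly_capacity / confac    # e.g. 1800 / 0.09 = 20000 veh/day
    vc_ratio = daily_volume / daily_capacity
    return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

# A constant CONFAC (current FSUTMS practice) and a variable, congestion-
# dependent CONFAC (the calibrated model studied here) differ only in how
# the `confac` argument is supplied at each equilibrium iteration.
print(bpr_travel_time(10.0, 18000.0, 1800.0, 0.09))  # 10 * (1 + 0.15 * 0.9**4)
```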
Abstract:
Negative-ion mode electrospray ionization, ESI(−), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(−)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M − H]⁻ ions, identified primarily as naphthenic acids, phenols and carbazole-analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg KOH g⁻¹. To facilitate the spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be the most appropriate for selecting important variables, reducing the number of variables to 183 and producing a root mean square error of prediction of 0.32 mg KOH g⁻¹. By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
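As one concrete instance of the variable selection step, a hedged sketch of the standard VIP score computed from a fitted scikit-learn PLS model (generic chemometrics practice, not the paper's code; iPLS and UVE would be separate routines):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in the Projection for each of the p X-variables of
    a fitted PLSRegression; variables with VIP > 1 are conventionally kept."""
    t = pls.x_scores_    # (n_samples, n_components)
    w = pls.x_weights_   # (p, n_components)
    q = pls.y_loadings_  # (n_targets, n_components)
    p = w.shape[0]
    s = (t ** 2).sum(axis=0) * (q ** 2).sum(axis=0)  # y-variance per component
    w2 = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (w2 @ s) / s.sum())

# Workflow sketch: X holds the ~5700 assigned peak magnitudes per sample and
# y the measured TAN values (mg KOH/g), e.g.:
#   pls = PLSRegression(n_components=5).fit(X, y)
#   selected = np.flatnonzero(vip_scores(pls) > 1.0)
```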
Abstract:
The strategy used to treat HCV infection depends on the genotype involved. An accurate and reliable genotyping method is therefore of paramount importance. We describe here, for the first time, the use of a liquid microarray for HCV genotyping. This liquid microarray is based on the 5'UTR - the most highly conserved region of HCV - and the variable-region NS5B sequence. The simultaneous genotyping of two regions can be used to confirm findings and should detect inter-genotypic recombination. Plasma samples from 78 patients infected with viruses whose genotypes and subtypes had been determined with the Versant (TM) HCV Genotype Assay LiPA (version I; Siemens Medical Solutions, Diagnostics Division, Fernwald, Germany) were tested with our new liquid microarray method. This method successfully determined the genotypes of 74 of the 78 samples previously genotyped with the Versant (TM) HCV Genotype Assay LiPA (74/78, 95%). The concordance between the two methods was 100% for genotype determination (74/74). At the subtype level, all 3a and 2b samples gave identical results with both methods (17/17 and 7/7, respectively). Two 2c samples were correctly identified by the microarray, but could only be determined to the genotype level with the Versant (TM) HCV assay. Genotype 1 subtypes (1a and 1b) were correctly identified by the Versant (TM) assay and the microarray in 68% and 40% of cases, respectively. No genotype discordance was found for any sample. HCV was successfully genotyped with both methods, which is of prime importance for treatment planning. Liquid microarray assays may therefore be added to the list of methods suitable for HCV genotyping. The assay provides comparable results and may readily be adapted for the detection of other viruses that frequently co-infect HCV patients. Liquid array technology is thus a reliable and promising platform for HCV genotyping.
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by using the Galerkin method for the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics.
Abstract:
The objective of this project was to evaluate the extraction of soybean flour protein in dairy whey by a multivariate statistical method with a 2³ factorial design. The influence of three variables was considered: temperature, pH and percentage of sodium chloride, against the specific process variable (percentage of protein extracted). It was observed that, in the protein extraction over time and temperature, the treatments at 80 °C for 2 h presented the highest values of total protein (5.99%). The percentage of protein extracted increased with longer heating times. The maximum point of the function representing the protein extraction was therefore analysed through the 2³ factorial experiment. The results showed that all the variables were important to the extraction. The statistical analysis showed that the parameters pH, temperature and percentage of sodium chloride were not sufficient to optimize the extraction process, since it was not possible to obtain the inflection point of the mathematical function; on the other hand, the mathematical model was significant, as well as predictive.
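A minimal sketch of how main effects are estimated in a 2³ factorial design such as the one above, with coded ±1 levels; the response values are placeholders for illustration, not the study's data:

```python
import itertools
import numpy as np

# Coded +/-1 design matrix of a 2^3 factorial: temperature, pH, NaCl %.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
# Placeholder responses (% protein extracted), in the order generated by
# itertools.product; illustrative values only, not the study's measurements.
y = np.array([3.1, 3.8, 3.4, 4.0, 4.9, 5.6, 5.2, 5.99])

X = np.column_stack([np.ones(len(y)), design])  # intercept + 3 main effects
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["mean", "temperature", "pH", "NaCl"], coef):
    print(f"{name}: {c:+.3f}")  # each coefficient is half the classical effect
```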
Abstract:
A method based on a specific power-law relationship between the hydraulic head and the Boltzmann variable was recently presented. We generalized this relationship to a range of powers and extended the solution to include the saturated zone. As a result, the new solution satisfies the Bruce and Klute equation exactly.
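For context, the Boltzmann variable and the kind of power-law closure referred to, in generic notation (the abstract gives neither the exact functional form nor the range of powers, so the exponent n below is schematic):

```latex
% Boltzmann similarity variable combining position x and time t:
\lambda = x\,t^{-1/2}
% Assumed power-law relationship between the hydraulic head h and \lambda:
h(\lambda) = a\,\lambda^{n}
```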
Abstract:
Purpose: The diagnosis of prostate cancer in men with persistently increased prostate specific antigen after a negative prostate biopsy has become a great challenge for urologists and pathologists. We analyzed the diagnostic value of 6 genes in the tissue of patients with prostate cancer. Materials and Methods: The study comprised 50 patients with localized disease who underwent radical prostatectomy. Gene selection was based on a previous microarray analysis. Among 4,147 genes differentially expressed between 2 pools of patients, 6 genes (PSMA, TMEFF2, GREB1, TH1L, IgH3 and PGC) were selected. These genes were tested for diagnostic value using the quantitative reverse transcription polymerase chain reaction method. Initially malignant tissue samples from 33 patients were analyzed, and in the second part of the study we analyzed benign tissue samples from the other 17 patients with prostate cancer. The control group comprised tissue samples from patients with benign prostatic hyperplasia. Results: Analysis of malignant prostatic tissue demonstrated that prostate specific membrane antigen was over-expressed (mean 9 times) and pepsinogen C was under-expressed (mean 1.3 × 10⁻⁴ times) in all cases compared to benign prostatic hyperplasia. The other 4 tested genes showed a variable expression pattern that did not allow differentiation between benign and malignant cases. When we tested these results in the benign prostate tissues from patients with cancer, pepsinogen C maintained the expression pattern. In terms of prostate specific membrane antigen, despite over-expression in most cases (mean 12 times), 2 cases (12%) presented with under-expression. Conclusions: Pepsinogen C tissue expression may constitute a powerful adjunctive method to prostate biopsy in the diagnosis of prostate cancer.
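A hedged note on where fold-change figures such as "mean 9 times" typically come from in qRT-PCR studies, assuming the common Livak quantification model (the abstract does not state which model was used):

```latex
% Relative expression by the 2^{-\Delta\Delta C_t} (Livak) method:
\Delta C_t = C_t(\text{target gene}) - C_t(\text{reference gene})
\qquad
\text{fold change} = 2^{-\left[\Delta C_t(\text{tumour}) - \Delta C_t(\text{BPH control})\right]}
```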