989 results for Calculation-based


Relevance:

30.00%

Publisher:

Abstract:

The increase in building pathologies related to the use of stone materials and ventilated stone veneers requires the reformulation of design concepts for building façades and of the architectural project itself. The aim of this paper is to identify, analyze and synthetically evaluate building pathologies in stone ventilated façades in order to establish the main technical conditions to be considered in architectural design, by interpreting the mechanical behavior of these façades and their capacity to prevent such pathologies and ensure proper performance during the building's lifetime. The methodology is based on both laboratory stone tests and in situ tests of construction systems, analyzing the physical and mechanical behavior of the outer layer in relation to other building requirements. The results imply the need for proper sizing, specific quality control and the practical application of calculation methods to control high pressure concentrations in ventilated façades and reach appropriate design solutions. In conclusion, research on the different pathologies of stone ventilated façades, together with the study of their mechanical behavior, their anchorage and their connection with constructive aspects, will help to improve the construction quality of stone ventilated façades and to promote the use of natural stone in modern architecture.

Relevance:

30.00%

Publisher:

Abstract:

We saw previously that the diversity indices IT and Shannon's make it possible to characterize globally, by a single number, one fundamental aspect of text structure. A more precise knowledge of this structure, however, requires specific abundance distributions and a suitable mathematical model to represent them. Among the numerous models that could be proposed, the only ones of real practical interest are the simplest. We limit ourselves to applying three of them to the language L(MT): the log-linear, the log-normal and MacArthur's models, widely used for calculating the diversity of species in ecosystems, and used here, we believe for the first time, to calculate the diversity of a text written in a given language, in our case L(MT). We show the advantages and drawbacks of each of these model types, methods for fitting them to text data and, briefly, tests for deciding whether the fit is acceptable.
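The Shannon index mentioned here is the standard H = -Σ pᵢ ln pᵢ computed over abundances. A minimal sketch of how it could be applied to word abundances in a text; the tokenization is an illustrative assumption, not the authors' procedure:

```python
import math
from collections import Counter

def shannon_diversity(tokens):
    """Shannon diversity H = -sum(p_i * ln p_i) over token abundances."""
    counts = Counter(tokens)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Illustrative text; in the paper the tokens would come from L(MT).
tokens = "the cat sat on the mat and the cat slept".split()
print(shannon_diversity(tokens))  # higher H = more diverse vocabulary
```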

Relevance:

30.00%

Publisher:

Abstract:

The stable similarity reduction of a nonsymmetric square matrix to tridiagonal form has been a long-standing problem in numerical linear algebra. The biorthogonal Lanczos process is in principle a candidate method for this task, but in practice it is confined to sparse matrices and is restarted periodically because roundoff errors affect its three-term recurrence scheme and degrade the biorthogonality after a few steps. This adds to its vulnerability to serious breakdowns or near-breakdowns, the handling of which involves recovery strategies such as the look-ahead technique, which needs a careful implementation to produce a block-tridiagonal form with unpredictable block sizes. Other candidate methods, geared generally towards full matrices, rely on elementary similarity transformations that are prone to numerical instabilities. Such concomitant difficulties have hampered finding a satisfactory solution to the problem for either sparse or full matrices. This study focuses primarily on full matrices. After outlining earlier tridiagonalization algorithms from within a general framework, we present a new elimination technique combining orthogonal similarity transformations that are stable. We also discuss heuristics to circumvent breakdowns. Applications of this study include eigenvalue calculation and the approximation of matrix functions.
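For reference, a minimal NumPy sketch of the classical biorthogonal (two-sided) Lanczos recurrence that the abstract critiques, including the breakdown check on the inner product of the candidate vectors; this is the textbook process, not the new elimination technique proposed in the paper:

```python
import numpy as np

def biorthogonal_lanczos(A, v1, w1, m, tol=1e-12):
    """Classical two-sided Lanczos: builds bases V, W such that
    W.T @ A @ V is tridiagonal, stopping on (near-)breakdown."""
    n = A.shape[0]
    v, w = v1 / np.dot(w1, v1), w1.copy()   # normalize so w.T @ v = 1
    V, W = [v], [w]
    alpha, beta, delta = [], [0.0], [0.0]   # beta[0] = delta[0] = 0
    v_old = w_old = np.zeros(n)
    for j in range(m):
        a = W[j] @ (A @ V[j])
        alpha.append(a)
        v_hat = A @ V[j] - a * V[j] - beta[j] * v_old
        w_hat = A.T @ W[j] - a * W[j] - delta[j] * w_old
        s = w_hat @ v_hat
        if abs(s) < tol:                    # serious breakdown (or termination)
            break
        d, b = np.sqrt(abs(s)), s / np.sqrt(abs(s))
        v_old, w_old = V[j], W[j]
        V.append(v_hat / d); W.append(w_hat / b)
        delta.append(d); beta.append(b)
    return np.array(V).T, np.array(W).T, alpha, beta[1:], delta[1:]

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
V, W, a, b, d = biorthogonal_lanczos(A, rng.standard_normal(8), rng.standard_normal(8), 6)
# W.T @ A @ V is tridiagonal only up to roundoff: it is exactly this
# biorthogonality that degrades after a few steps in finite precision.
```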

Relevance:

30.00%

Publisher:

Abstract:

A numerical method is introduced to determine the nuclear magnetic resonance frequency of a donor (P-31) doped inside a silicon substrate under the influence of an applied electric field. This phosphorus donor has been suggested for operation as a qubit in the realization of a solid-state scalable quantum computer. The qubit is operated by a combination of the rotation of the phosphorus nuclear spin through a globally applied magnetic field and the selection of a particular phosphorus nucleus through a locally applied electric field. To realize the selection function, the relationship between the applied electric field and the change in the nuclear magnetic resonance frequency of phosphorus must be known. In this study, based on the wave functions obtained from effective-mass theory, we introduce an empirical correction factor for the wave functions at the donor nucleus. Using the corrected wave functions, we formulate a first-order perturbation theory for the system perturbed by the electric field. To calculate the potential distributions inside the silicon and silicon dioxide layers due to the applied electric field, we use multilayered Green's functions and solve an integral equation by the moment method. This enables us to consider more realistic, arbitrarily shaped, three-dimensional qubit structures. Using the calculated potential distributions, we investigate the effects of the thicknesses of the silicon and silicon dioxide layers, the relative position of the donor, and the applied electric field on the nuclear magnetic resonance frequency of the donor.
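For orientation, the textbook relations this approach builds on: the contact hyperfine coupling scales with the electron probability density at the nucleus, and the field-induced change of the wave function follows from standard first-order perturbation theory (the empirical correction factor and the Green's-function potential calculation are the paper's own contributions and are not reproduced here):

```latex
% Contact hyperfine coupling: proportional to the donor-electron
% probability density at the nucleus, so the NMR frequency shifts
% as the applied field deforms the wave function.
A \;\propto\; \left|\Psi(\mathbf{r}_{\text{donor}})\right|^{2}

% Standard first-order correction of the donor state by the
% electrostatic potential V(\mathbf{r}) of the applied field:
\Psi \;\approx\; \psi_{0} \;+\; \sum_{m \neq 0}
  \frac{\langle \psi_{m} \,|\, eV(\mathbf{r}) \,|\, \psi_{0} \rangle}
       {E_{0} - E_{m}}\; \psi_{m}
```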

Relevance:

30.00%

Publisher:

Abstract:

A location-based search engine must be able to find and assign proper locations to Web resources. Host, content and metadata location information are not sufficient to describe the location of resources, as they are ambiguous or unavailable for many documents. We introduce target location as the location of the users of Web resources. Target location is content-independent and can be applied to all types of Web resources. A novel method is introduced that uses log files and IP addresses to track the visitors of websites. The experiments show that target location can be calculated for almost all documents on the Web at country level and for the majority of them at state and city levels. It can be assigned to Web resources as a new definition and dimension of location and can be used separately or together with other relevant locations to define the geography of Web resources. This compensates for insufficient geographical information on Web resources and would facilitate the design and development of location-based search engines.
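A minimal sketch of the aggregation step involved: geolocating visitor IPs from a server log and taking the dominant country as the resource's country-level target location. The log format and the `geolocate` lookup table are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def geolocate(ip):
    """Hypothetical IP-to-country lookup; a real system would query a
    GeoIP-style database or service."""
    sample_db = {"203.0.113.": "AU", "198.51.100.": "US", "192.0.2.": "DE"}
    return next((cc for prefix, cc in sample_db.items() if ip.startswith(prefix)), "??")

def target_location(log_lines):
    """Count visitor countries; the modal country approximates the
    country-level target location of the resource."""
    counts = Counter(geolocate(line.split()[0]) for line in log_lines)
    return counts.most_common(1)[0][0], counts

log = ["203.0.113.7 GET /page", "203.0.113.9 GET /page", "198.51.100.2 GET /page"]
print(target_location(log))  # ('AU', Counter({'AU': 2, 'US': 1}))
```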

Relevance:

30.00%

Publisher:

Abstract:

We present a vision and a proposal for using Semantic Web technologies in the organic food industry. This is a very knowledge-intensive industry at every step from the producer, to the caterer or restaurateur, through to the consumer. There is a crucial need for a concept of environmental audit which would allow the various stakeholders to know the full environmental impact of their economic choices. This is a different and parallel form of knowledge to that of price. Semantic Web technologies can be used effectively for the calculation and transfer of this type of knowledge (together with other forms of multimedia data), which could contribute considerably to the commercial and educational impact of the organic food industry. We outline how this could be achieved, as our essential objective is to show how advanced technologies could be used both to reduce ecological impact and to increase public awareness.

Relevance:

30.00%

Publisher:

Abstract:

We report a near-ideal in-fiber polarizer implemented with 45° tilted fiber Bragg grating structures UV-inscribed in hydrogenated Ge-doped fiber. We demonstrate a polarization-extinction ratio of 33 dB over a 100-nm operation range near 1550 nm and further achieve a 99.5% degree of polarization for unpolarized light with these gratings. We also theoretically investigate the tilted grating structures using a Green's function calculation, revealing their unique polarization characteristics, which are in excellent agreement with the experimental data.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with the clinical data routinely available in patients' records. DESIGN: Subjective estimates of coronary heart disease risk and the results of four different calculation methods were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different calculation methods for detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all the risk factors required for formal calculation of coronary heart disease risk (high density lipoprotein (HDL) cholesterol concentrations were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), with a trend towards underestimation of risk. Agreement between the risks calculated by general practitioners and practice nurses for the same patients was also moderate (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for coronary heart disease risk (practice nurses 79%, general practitioners 80%) and the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of coronary heart disease risk in primary care is hampered by the poor availability of data on risk factors. General practitioners and practice nurses can evaluate coronary heart disease risk with only moderate accuracy. Data on risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
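For reference, the agreement and accuracy measures reported above are the standard ones; a minimal sketch with made-up ratings, not the study's data:

```python
def cohen_kappa(a, b, categories):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def sensitivity_specificity(predicted_high, truly_high):
    """Accuracy of a risk tool against the reference standard."""
    tp = sum(p and t for p, t in zip(predicted_high, truly_high))
    tn = sum(not p and not t for p, t in zip(predicted_high, truly_high))
    return tp / sum(truly_high), tn / sum(not t for t in truly_high)

# Illustrative only: GP vs nurse risk categories for six patients.
gp    = ["high", "low", "high", "low", "low", "high"]
nurse = ["high", "low", "low",  "low", "low", "high"]
print(cohen_kappa(gp, nurse, ["high", "low"]))  # ~0.67
```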

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To evaluate theoretically three previously published formulae that use the intra-operative aphakic refractive error to calculate intraocular lens (IOL) power, without requiring pre-operative biometry. The formulae are as follows: IOL power (D) = aphakic refraction × 2.01 [Ianchulev et al., J. Cataract Refract. Surg. 31 (2005) 1530]; IOL power (D) = aphakic refraction × 1.75 [Mackool et al., J. Cataract Refract. Surg. 32 (2006) 435]; IOL power (D) = 0.07x² + 1.27x + 1.22, where x = aphakic refraction [Leccisotti, Graefes Arch. Clin. Exp. Ophthalmol. 246 (2008) 729]. METHODS: Gaussian first-order calculations were used to determine the relationship between the intra-operative aphakic refractive error and the IOL power required for emmetropia in a series of schematic eyes incorporating varying corneal powers, pre-operative crystalline lens powers, axial lengths and post-operative IOL positions. The three previously published formulae, based on empirical data, were then compared in terms of the IOL power errors that arose in the same schematic eye variants. RESULTS: An inverse relationship exists between the theoretical ratio and axial length. Corneal power and initial lens power have little effect on the calculated ratios, whilst final IOL position has a significant impact. None of the three empirically derived formulae is universally accurate, but each is able to predict IOL power precisely in certain theoretical scenarios. The formulae derived by Ianchulev et al. and Leccisotti are most accurate for posterior IOL positions, whereas the Mackool et al. formula is most reliable when the IOL is located more anteriorly. CONCLUSION: Final IOL position was found to be the chief determinant of IOL power errors. Although the A-constants of IOLs are known and may be accurate, a variety of factors can still influence the final IOL position and lead to undesirable refractive errors. Optimum results using these novel formulae would be achieved in myopic eyes.
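The three published formulae can be compared directly; a minimal sketch (the example refraction value is illustrative):

```python
def iol_ianchulev(x):   # Ianchulev et al. (2005)
    return 2.01 * x

def iol_mackool(x):     # Mackool et al. (2006)
    return 1.75 * x

def iol_leccisotti(x):  # Leccisotti (2008)
    return 0.07 * x**2 + 1.27 * x + 1.22

x = 10.0  # intra-operative aphakic refraction in dioptres (illustrative)
for f in (iol_ianchulev, iol_mackool, iol_leccisotti):
    print(f.__name__, f"{f(x):.2f} D")  # 20.10 D, 17.50 D, 20.92 D
# The spread between the three predictions reflects the differing
# assumptions each formula makes about final IOL position.
```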

Relevance:

30.00%

Publisher:

Abstract:

Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal. © 2013 Optical Society of America.
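The channel model underlying this approach is the nonlinear Schrödinger equation; in one common normalization for the normal-dispersion (defocusing) case considered here:

```latex
% Normalized NLSE, normal-dispersion (defocusing) case:
i\,\frac{\partial q}{\partial z}
  \;=\; \frac{\partial^{2} q}{\partial t^{2}} \;-\; 2\,|q|^{2}\,q
% The associated nonlinear (inverse-scattering) spectrum evolves
% trivially along z, which is what lets the transmitted pattern be
% recovered without direct backward propagation.
```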

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to determine the network capacity (the number of necessary internal switching lines) based on detailed user behaviour and the demanded quality of service parameters in an overall telecommunication system. We consider a detailed conceptual model, and its corresponding analytical traffic model, of a telecommunication system with (virtual) circuit switching, in a stationary state, with a generalized input flow, repeated calls, a limited number of homogeneous terminals, and losses due to abandoned and interrupted dialing, blocked and interrupted switching, unavailable called terminals, blocked and abandoned ringing (absent called user) and abandoned conversation. We propose an analytical-numerical solution for finding the number of internal switching lines and the values of some basic traffic parameters as functions of the telecommunication system state. These parameters are required to maintain the demanded level of network quality of service (QoS). Dependencies based on the numerical-analytical results are shown graphically. For the proposed conceptual model and its corresponding analytical model, a network dimensioning task (NDT) is formulated, and the solvability of the NDT and the necessary conditions for an analytical solution are investigated as well. A rule (algorithm) and a computer program are proposed for calculating the corresponding number of internal switching lines, as well as the corresponding values of the traffic parameters, making QoS management easier.
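The simplest instance of such a dimensioning calculation is the classical Erlang B recursion for sizing a group of lines to a blocking target; the paper's model generalizes well beyond it (repeated calls, detailed user behaviour), but the sketch shows the basic mechanics:

```python
def erlang_b(lines, traffic):
    """Blocking probability for `lines` servers offered `traffic` erlangs,
    via the numerically stable Erlang B recursion."""
    b = 1.0
    for n in range(1, lines + 1):
        b = traffic * b / (n + traffic * b)
    return b

def dimension(traffic, target_blocking):
    """Smallest number of internal switching lines meeting the QoS target."""
    n = 1
    while erlang_b(n, traffic) > target_blocking:
        n += 1
    return n

print(dimension(traffic=50.0, target_blocking=0.01))  # about 64 lines
```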

Relevance:

30.00%

Publisher:

Abstract:

The paper is dedicated to questions of modeling and grounding super-resolution measuring-calculating systems within the conception "device + PC = new possibilities". The authors have developed a new mathematical method for solving multi-criteria optimization problems. The method is based on the physico-mathematical formalism of reduction of fuzzy, distorted measurements. It is shown that the decisive role is played by the mathematical properties of the physical models of the measured object, the surroundings, and the measuring components of the measuring-calculating system, and of their interaction, as well as by the developed mathematical method for processing and interpreting the measurements.

Relevance:

30.00%

Publisher:

Abstract:

Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information on spectral data within the normal distribution is retained in the regression model, while the information on outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained with the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed than the four known weighting functions.
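The paper's improved segmented weighting function is not reproduced here, but the standard WLS-SVM weighting it builds on (Suykens et al.) is well known; a sketch with the usual thresholds c1 = 2.5, c2 = 3:

```python
import numpy as np

def wlssvm_weights(residuals, c1=2.5, c2=3.0):
    """Standard segmented WLS-SVM weighting: full weight near the robust
    scale estimate, linear decay in the transition band, and near-zero
    weight for outliers."""
    s = np.median(np.abs(residuals - np.median(residuals))) / 0.6745  # robust scale
    r = np.abs(residuals) / s
    w = np.ones_like(r)
    band = (r > c1) & (r <= c2)
    w[band] = (c2 - r[band]) / (c2 - c1)
    w[r > c2] = 1e-4
    return w

e = np.array([0.10, -0.20, 0.05, 3.50, -0.15])  # illustrative residuals
print(wlssvm_weights(e))  # the outlier at 3.50 is down-weighted to 1e-4
```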

Relevance:

30.00%

Publisher:

Abstract:

This article shows the social importance of the subsistence minimum in Georgia and presents the methodology of its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. The dynamics of average consumer expenditures on food and the share of other expenditures are considered as well. Our methodology for calculating the subsistence minimum is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities need to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
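A common way such calculations are structured is the food-share method: cost the minimum food basket, then scale by the weight of food in total expenditures. The basket items, prices and food-share weight below are illustrative assumptions, not Georgia's official figures:

```python
# Illustrative daily minimum food basket: (item, kg per day, price per kg)
basket = [("bread", 0.30, 2.0), ("milk", 0.25, 1.5), ("potatoes", 0.40, 0.8)]

food_share = 0.70  # assumed weight of food in the subsistence-minimum basket

daily_food_cost = sum(qty * price for _, qty, price in basket)
monthly_food_cost = daily_food_cost * 30

# Non-food expenditures enter through the food/non-food weights:
subsistence_minimum = monthly_food_cost / food_share
print(f"subsistence minimum: {subsistence_minimum:.2f} per month")
```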