956 results for Mathematical Techniques--Error Analysis
Abstract:
This article is the second part of a review of the historical evolution of mathematical models applied in the development of building technology. The first part described the current state of the art and contrasted various models with regard to their applications to conventional and intelligent buildings. It concluded that the mathematical techniques adopted in neural networks, expert systems, fuzzy logic and genetic models, which can be used to address model uncertainty, are well suited to modelling intelligent buildings. Despite this progress, the likely future development of intelligent buildings along current trends exposes some potential limitations of these models. This paper attempts to uncover the fundamental limitations inherent in these models and provides some insights into future modelling directions, with special focus on the techniques of semiotics and chaos. Finally, by demonstrating an example of an intelligent building system together with the mathematical models that have been developed for it, this review addresses the influence of mathematical models as a potential aid in developing intelligent buildings, and perhaps even more advanced buildings, in the future.
Abstract:
Straightforward mathematical techniques are used innovatively to form a coherent theoretical system for dealing with chemical equilibrium problems. For a systematic theory it is necessary to establish a framework that connects different concepts. This paper shows the usefulness and consistency of the system through applications of the theorems introduced previously. Some theorems are shown, somewhat unexpectedly, to be mathematically correlated, and their relationships are obtained in a coherent manner. It is shown that Theorem 1 plays an important part in interconnecting most of the theorems. The usefulness of Theorem 2 is illustrated by proving it to be consistent with Theorem 3. A set of uniform mathematical expressions is associated with Theorem 3. A variety of mathematical techniques based on Theorems 1–3 are shown to establish the direction of equilibrium shift. The equilibrium properties expressed in initial and equilibrium conditions are shown to be connected via Theorem 5. Theorem 6 is connected with Theorem 4 through the mathematical representation of Theorem 1.
Abstract:
A new record of sea surface temperature (SST) for climate applications is described. This record provides independent corroboration of global variations estimated from SST measurements made in situ. Infrared imagery from Along-Track Scanning Radiometers (ATSRs) is used to create a 20 year time series of SST at 0.1° latitude-longitude resolution, in the ATSR Reprocessing for Climate (ARC) project. A very high degree of independence from in situ measurements is achieved via physics-based techniques. Skin SST and SST estimated for 20 cm depth are provided, with grid cell uncertainty estimates. Comparison with in situ data sets establishes that ARC SSTs generally have a bias of order 0.1 K or smaller. The precision of the ARC SSTs is 0.14 K during 2003 to 2009, from three-way error analysis. Over the period 1994 to 2010, ARC SSTs are stable, with better than 95% confidence, to within 0.005 K yr⁻¹ (demonstrated for tropical regions). The data set appears useful for cleanly quantifying interannual variability in SST and major SST anomalies. The ARC SST global anomaly time series is compared to the in situ-based Hadley Centre SST data set version 3 (HadSST3). Within known uncertainties in bias adjustments applied to in situ measurements, the independent ARC record and HadSST3 present the same variations in global marine temperature since 1996. Since the in situ observing system evolved significantly in its mix of measurement platforms and techniques over this period, ARC SSTs provide an important corroboration that HadSST3 accurately represents recent variability and change in this essential climate variable.
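The three-way error analysis mentioned above is commonly carried out by triple collocation. Below is a minimal sketch of that idea, assuming three collocated series whose errors are mutually independent and zero-mean; the function name and the simplification are illustrative, not the ARC project's actual pipeline:

```python
import numpy as np

def triple_collocation_stdevs(x1, x2, x3):
    """Estimate the error standard deviation of each of three collocated
    measurement series of the same quantity, assuming the errors of the
    three systems are mutually independent and zero-mean."""
    x1, x2, x3 = (np.asarray(x, dtype=float) for x in (x1, x2, x3))
    # The covariance of the two difference series sharing system i
    # isolates the error variance of system i: e.g.
    # Cov(x1 - x2, x1 - x3) = Var(e1) when errors are independent.
    v1 = np.cov(x1 - x2, x1 - x3)[0, 1]
    v2 = np.cov(x2 - x1, x2 - x3)[0, 1]
    v3 = np.cov(x3 - x1, x3 - x2)[0, 1]
    # Sampling noise can make an estimate slightly negative; clip at zero.
    return tuple(np.sqrt(max(v, 0.0)) for v in (v1, v2, v3))
```

With three large collocated samples, each system's error standard deviation is recovered without needing any one of them to be treated as truth, which is what makes the precision estimate independent of the in situ record.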
Abstract:
Smart meters are becoming more ubiquitous as governments aim to reduce the risks to the energy supply as the world moves toward a low carbon economy. The data they provide could create a wealth of information to better understand customer behaviour. However, at the household level, and even at the low voltage (LV) substation level, energy demand is extremely volatile, irregular and noisy compared to the demand at the high voltage (HV) substation level. Novel analytical methods will be required in order to optimise the use of household-level data. In this paper we briefly outline some mathematical techniques which will play a key role in better understanding customer behaviour and in creating solutions for supporting the network at the LV substation level.
Abstract:
We study the mutual interaction between the dark sectors (dark matter and dark energy) of the Universe by resorting to the extended thermodynamics of irreversible processes, and constrain it with type Ia supernova data. As a by-product, the resulting present-day dark matter temperature is not extremely small and can meet the independent estimate of the temperature of the gas of sterile neutrinos.
Abstract:
Complex networks have been increasingly used in text analysis, including in connection with natural language processing tools, as important text features appear to be captured by the topology and dynamics of the networks. Following previous works that apply complex network concepts to text quality measurement, summary evaluation, and author characterization, we now focus on machine translation (MT). In this paper we assess the possible representation of texts as complex networks to evaluate cross-linguistic issues inherent in manual and machine translation. We show that translations of different quality generated by MT tools can be distinguished from their manual counterparts by means of metrics such as in-degrees (ID), out-degrees (OD), clustering coefficients (CC), and shortest paths (SP). For instance, we demonstrate that the average OD in networks of automatic translations consistently exceeds the values obtained for manual ones, and that the CC values of source texts are not preserved in manual translations, but are in good automatic translations. This probably reflects the text rearrangements humans perform during manual translation. We envisage that such findings could lead to better MT tools and automatic evaluation metrics.
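One common way to represent a text as a complex network is the word-adjacency model: each ordered pair of consecutive tokens becomes a directed edge, and metrics such as the average out-degree can then be compared between translations. The sketch below assumes this construction (the abstract does not specify the exact model used), and all names are illustrative:

```python
from collections import defaultdict

def word_adjacency_network(text):
    """Build a directed word-adjacency network from a text: nodes are
    distinct tokens, and each ordered pair of consecutive tokens
    contributes a directed edge."""
    tokens = text.lower().split()
    nodes = set(tokens)
    out_edges = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        if a != b:  # ignore self-loops from repeated words
            out_edges[a].add(b)
    return nodes, out_edges

def average_out_degree(nodes, out_edges):
    """Average out-degree (OD) over all nodes in the network."""
    return sum(len(targets) for targets in out_edges.values()) / len(nodes)
```

Computing the same metric on a source text, a manual translation and an automatic translation is then a direct comparison of three numbers per metric, which is the kind of discrimination the abstract describes.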
Abstract:
Planar circuits are structures that are increasingly attracting the attention of researchers, due to their good performance and their capacity to integrate with other devices, in the prototyping of systems for transmitting and receiving signals in the microwave range. In this context, the study and development of new techniques for the analysis of these devices has contributed significantly to the design of structures with excellent performance and high reliability. In this work, the full-wave method based on the concept of electromagnetic waves and on the principle of reflection and transmission of waves at an interface, the Wave Concept Iterative Procedure (WCIP), or iterative wave method, is described as a high-precision tool for the study of microwave planar circuits. The proposed method is applied to the characterization of planar filters, microstrip antennas and frequency selective surfaces. Prototype devices were built and the experimental results confirmed the proposed mathematical model. The results were also compared with simulations in Ansoft HFSS, and good agreement between them was observed.
Abstract:
This work proposes a new technique for phasor estimation applied in microprocessor-based numerical relays for distance protection of transmission lines, based on the recursive least squares method and called modified random-walking least squares. The performance of phasor estimation methods is compromised mainly by the exponentially decaying DC component present in fault currents. In order to reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB and its performance is compared to the one-cycle Fourier technique and to a conventional phasor estimation also based on the least squares algorithm. The least-squares-based methods used for comparison with the proposed one were: recursive with forgetting factor, covariance resetting and random walking. The performance analysis of the techniques was carried out by means of synthetic signals and signals obtained from simulations in the Alternative Transients Program (ATP). When compared to the other phasor estimation methods, the proposed method showed satisfactory results in terms of estimation speed, steady-state oscillation and overshoot. The method's performance was then analyzed under variations in the fault parameters (resistance, distance, angle of incidence and type of fault); the results did not show significant variations in performance. In addition, the apparent impedance trajectory and the estimated distance to the fault were analyzed, and the presented method showed better results than the one-cycle Fourier algorithm.
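As an illustration of the least-squares idea underlying these estimators (not the paper's modified random-walking method, and without the morphological filter), a one-cycle least-squares phasor estimate can be written as a linear fit of cosine, sine and constant terms, the constant column absorbing a constant offset; names and parameters are illustrative:

```python
import numpy as np

def ls_phasor(samples, n_per_cycle):
    """One-cycle least-squares phasor estimate.

    Fits samples[n] ~ a*cos(2*pi*n/N) + b*sin(2*pi*n/N) + c, where the
    constant c absorbs a constant offset (a crude stand-in for the DC
    component handling; the paper's method filters the decaying DC
    component morphologically instead). Returns (magnitude, phase) of
    the fundamental phasor A*cos(theta + phi)."""
    samples = np.asarray(samples, dtype=float)
    n = np.arange(len(samples))
    theta = 2.0 * np.pi * n / n_per_cycle
    H = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    (a, b, _), *_ = np.linalg.lstsq(H, samples, rcond=None)
    # s = A*cos(theta + phi) = A*cos(phi)*cos(theta) - A*sin(phi)*sin(theta),
    # so a = A*cos(phi) and b = -A*sin(phi).
    return np.hypot(a, b), np.arctan2(-b, a)
```

The recursive variants the paper compares update this fit sample-by-sample instead of refitting the whole window, which is what makes them attractive in a numerical relay.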
Abstract:
We give a multidimensional extension of a one-dimensional integral inequality due to F. Carlson. The extension presented here involves Lp spaces with mixed norms in a very natural way. © 1984.
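For context, the classical one-dimensional Carlson inequality being extended states that, for nonnegative $a_n$ not all zero,

```latex
\left(\sum_{n=1}^{\infty} a_n\right)^{4}
  \le \pi^{2}\left(\sum_{n=1}^{\infty} a_n^{2}\right)
      \left(\sum_{n=1}^{\infty} n^{2} a_n^{2}\right),
```

with $\pi^{2}$ the best possible constant; the paper's multidimensional version replaces the sums on the right by mixed-norm $L^{p}$ expressions.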
Abstract:
This work considers a problem of interest in several technological applications, such as the thermal control of electronic equipment. It is also important to study the heat transfer performance of these components under off-normal conditions, such as during the failure of cooling fans. The effect of natural convection on the flow and heat transfer in a cavity with two flush-mounted heat sources on the left vertical wall, simulating electronic components, is studied numerically and experimentally. The influence of the power distribution, the spacing between the heat sources and the cavity aspect ratio was investigated. An analysis of the average Nusselt number of the two heat sources was performed to investigate the behavior of the heat transfer coefficients. After an error analysis, the results obtained numerically and experimentally showed good agreement.
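For reference, the average Nusselt number analysed above is the dimensionless heat transfer coefficient,

```latex
\overline{Nu} = \frac{\bar{h}\,L}{k},
```

where $\bar{h}$ is the average convective coefficient over a heat source, $L$ a characteristic length, and $k$ the thermal conductivity of the fluid (the specific choice of $L$ is not stated in the abstract).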
Abstract:
In this article, an implementation of structural health monitoring process automation based on vibration measurements is proposed. The work presents an alternative approach whose intent is to exploit the capability of model updating techniques, associated with neural networks, to be used in automating fault detection. The updating procedure supplies a reliable model which makes it possible to simulate any damage condition, in order to establish a direct correlation between faults and deviations in the response of the model. The ability of neural networks to recognize, from a known signature, changes in the actual data of the model in real time is explored to investigate changes in the actual operating conditions of the system. The network is trained using a compressed spectrum signal created for each specific type of fault. Different fault conditions for a frame structure are evaluated using simulated data as well as measured experimental data.
Abstract:
Smart material technology has become an area of increasing interest for the development of lighter and stronger structures able to incorporate actuator and sensor capabilities for collocated control. In the design of actively controlled structures, the determination of the actuator locations and of the controller gains is a very important issue. For that purpose, smart material modelling, modal analysis methods, and control and optimization techniques are the most important ingredients to be taken into account. The optimization problem to be solved in this context presents two interdependent aspects. The first is the discrete optimal actuator location selection problem, which is solved in this paper using genetic algorithms. The second is a continuous variable optimization problem, through which the control gains are determined using classical techniques. A cantilever Euler-Bernoulli beam is used to illustrate the presented methodology.
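The discrete actuator-placement step can be sketched as a small genetic algorithm. The version below assumes a toy fitness (the sum of per-position effectiveness scores) in place of a real modal/control measure, and all names and parameters are illustrative, not the paper's formulation:

```python
import random

def ga_actuator_placement(scores, k, pop_size=30, generations=60, seed=0):
    """Select k actuator positions (indices into `scores`) with a simple
    genetic algorithm: elitist selection, union crossover, swap mutation.
    `scores` stands in for a real actuator-effectiveness measure."""
    rng = random.Random(seed)
    n = len(scores)

    def fitness(ind):
        return sum(scores[i] for i in ind)

    def mutate(ind):
        # Swap one selected position for an unselected one.
        chosen = set(ind)
        chosen.remove(rng.choice(ind))
        chosen.add(rng.choice([i for i in range(n) if i not in chosen]))
        return tuple(sorted(chosen))

    def crossover(a, b):
        # Child draws k distinct positions from the union of the parents.
        pool = list(set(a) | set(b))
        return tuple(sorted(rng.sample(pool, k)))

    pop = [tuple(sorted(rng.sample(range(n), k))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = crossover(a, b)
            if rng.random() < 0.3:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the paper's setting the fitness evaluation would come from the smart-structure model and modal analysis, while the continuous gain optimization is handled separately by classical techniques.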
Abstract:
The collapse of a trapped Bose-Einstein condensate (BEC) of atoms in states 1 and 2 was studied. When the interaction among the atoms in state i was attractive, component i of the condensate experienced collapse. When the interaction between an atom in state 1 and one in state 2 was attractive, both components experienced collapse. The time-dependent Gross-Pitaevskii (GP) equation was used to study the time evolution of the collapse. An alternating growth and decay was observed in the number of particles experiencing collapse.
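The time-dependent GP equation referred to above has the standard single-component form

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
    + g\,|\psi(\mathbf{r},t)|^{2}\right]\psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^{2}a}{m},
```

where $a$ is the s-wave scattering length; for two components, each equation gains a cross term of the form $g_{12}|\psi_{2}|^{2}\psi_{1}$. Attractive interactions, i.e. $a < 0$ (so $g < 0$), are what drive the collapse described here.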
Abstract:
This work presents a computational methodology for determining synchronous machine parameters from load rejection test data. Through machine modelling, the quadrature-axis parameters can be obtained from a load rejection under an arbitrary reference, reducing the difficulties found in conventional tests. The proposed method is applied to a real machine.