939 results for SYSTEMS ANALYSIS
Abstract:
The main aims of the present study are to simultaneously relate the brazing parameters with: (i) the corresponding interfacial microstructure, (ii) the resultant mechanical properties and (iii) the electrochemical degradation behaviour of AISI 316 stainless steel/alumina brazed joints. Filler metals such as Ag–26.5Cu–3Ti and Ag–34.5Cu–1.5Ti were used to produce the joints. Three different brazing temperatures (850, 900 and 950 °C) were tested, keeping a constant holding time of 20 min, in order to understand the influence of the brazing temperature on the final microstructure and properties of the joints. The mechanical properties of the metal/ceramic (M/C) joints were assessed through bond strength tests carried out under a shear loading scheme. The fracture surfaces were studied both morphologically and structurally using scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and X-ray diffraction (XRD). The degradation behaviour of the M/C joints was assessed by means of electrochemical techniques. It was found that using the Ag–26.5Cu–3Ti brazing alloy and a brazing temperature of 850 °C produces the best results in terms of bond strength, 234 ± 18 MPa. The mechanical properties obtained could be explained on the basis of the different compounds identified on the fracture surfaces by XRD. On the other hand, the use of the Ag–34.5Cu–1.5Ti brazing alloy and a brazing temperature of 850 °C produces the best results in terms of corrosion rate (the lowest corrosion current density), 0.76 ± 0.21 μA cm⁻². Nevertheless, the joints produced at 850 °C using the Ag–26.5Cu–3Ti brazing alloy present the best compromise between mechanical properties and degradation behaviour, 234 ± 18 MPa and 1.26 ± 0.58 μA cm⁻², respectively. The role of Ti diffusion is fundamental to the final value achieved for the M/C bond strength. By contrast, the Ag and Cu distribution along the brazed interface seems to play the most relevant role in the electrochemical performance of the metal/ceramic joints.
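The abstract does not name the specific electrochemical technique behind the quoted corrosion current densities. For orientation only, a common route to values of i_corr of this kind is polarization measurement combined with the standard Stern–Geary relation, sketched below; the Tafel slopes and polarization resistance are generic symbols, not quantities reported in the study.

```latex
% Stern–Geary relation (standard electrochemistry, not taken from this study):
%   i_corr          : corrosion current density
%   R_p             : polarization resistance
%   beta_a, beta_c  : anodic and cathodic Tafel slopes
i_{\mathrm{corr}} = \frac{B}{R_p},
\qquad
B = \frac{\beta_a\,\beta_c}{2.303\,(\beta_a + \beta_c)}
```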
Abstract:
The process of resources systems selection plays an important part in Distributed/Agile/Virtual Enterprises (D/A/VEs) integration. However, resources systems selection is still a difficult problem to solve in a D/A/VE, as pointed out in this paper. Globally, the selection problem has been approached from different perspectives, originating different kinds of models/algorithms to solve it. To assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection model activities and tools and can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of the algorithms' limitations is very important so that, based on problem features, the most suitable algorithm can be developed and selected to guarantee good performance.
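As a concrete illustration of such a formulation, the sketch below assumes one plausible reading: each processing task must be assigned exactly one of its pre-selected resources at minimum total cost. The task names, cost values and the PuLP/CBC toolchain (CBC performs branch and bound over simplex-solved LP relaxations) are illustrative choices, not the paper's actual model or solver.

```python
# A minimal sketch, assuming a one-resource-per-task assignment model;
# all data below are hypothetical.
import pulp

tasks = ["t1", "t2", "t3"]                    # processing tasks
candidates = {"t1": ["r1", "r2"],             # pre-selected resources per task
              "t2": ["r2", "r3"],
              "t3": ["r1", "r3"]}
cost = {("t1", "r1"): 4, ("t1", "r2"): 6, ("t2", "r2"): 3,
        ("t2", "r3"): 5, ("t3", "r1"): 7, ("t3", "r3"): 2}

prob = pulp.LpProblem("resource_selection", pulp.LpMinimize)
x = {(t, r): pulp.LpVariable(f"x_{t}_{r}", cat="Binary")
     for t in tasks for r in candidates[t]}

prob += pulp.lpSum(cost[t, r] * x[t, r] for (t, r) in x)   # total cost objective
for t in tasks:                                            # one resource per task
    prob += pulp.lpSum(x[t, r] for r in candidates[t]) == 1

prob.solve()                # CBC: branch and bound over simplex LP relaxations
print({(t, r): int(x[t, r].value()) for (t, r) in x})
```

As the abstract notes, solve times for this kind of model grow quickly with the number of tasks and of pre-selected resources per task, which is what bounds the exact algorithms' domain of applicability.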
Abstract:
This paper studies the impact of energy and stock markets upon electricity markets using Multidimensional Scaling (MDS). Historical values from major energy, stock and electricity markets are adopted. To analyze the data, several graphs produced by MDS are presented and discussed. This method is useful for gaining deeper insight into the behavior and correlation of the markets. The results may also guide the construction of models, helping electricity market agents hedge against Market Clearing Price (MCP) volatility and, simultaneously, achieve better financial results.
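The abstract does not detail how the historical series are turned into an MDS map. A common pipeline, sketched below under that assumption, converts return correlations into distances d = sqrt(2(1 − ρ)) and embeds them in two dimensions; the synthetic prices and market labels are placeholders, not the data used in the paper.

```python
# A minimal MDS sketch on synthetic market data (illustrative only).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=0.0, sigma=0.02, size=(250, 4)).cumprod(axis=0)
returns = np.diff(np.log(prices), axis=0)              # daily log-returns

rho = np.corrcoef(returns, rowvar=False)               # market correlation matrix
dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))  # correlation -> distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)                       # 2-D map of the markets
for name, (px, py) in zip(["oil", "gas", "stocks", "electricity"], coords):
    print(f"{name}: ({px:+.3f}, {py:+.3f})")
```

Markets that end up close together on such a map move together, which is the kind of insight the paper draws on when reasoning about MCP hedging.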
Abstract:
This article presents a dynamical analysis of several traffic phenomena, applying a new modelling formalism based on the embedding of statistics and the Laplace transform. The new dynamic description integrates the concepts of fractional calculus, leading to a more natural treatment of the continuum of transfer function parameters intrinsic in this system. The results, using system theory tools, point out that it is possible to study traffic systems taking advantage of the knowledge gathered with automatic control algorithms.
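For readers unfamiliar with fractional-order operators, the sketch below shows the standard Grünwald–Letnikov discretization of a fractional derivative D^α f, the building block such fractional-calculus descriptions rest on. The test signal f(t) = t and the step size are illustrative; the paper's traffic model itself is not reproduced.

```python
# Grünwald–Letnikov approximation of D^alpha f (standard definition).
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Approximate D^alpha of the sampled signal f with time step h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):               # w_k = (-1)^k * binom(alpha, k), recursive
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha   # truncated GL sum
    return out

h = 1e-3
t = np.arange(0.0, 1.0, h)
approx = gl_fractional_derivative(t, alpha=0.5, h=h)
exact = 2.0 * np.sqrt(t / np.pi)        # known half-derivative of f(t) = t
print(abs(approx[-1] - exact[-1]))      # small discretization error
```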
Abstract:
Though the formal mathematical idea of introducing noninteger-order derivatives can be traced to the 17th century, in a 1695 letter in which L'Hospital asked Leibniz what the meaning of D^n y would be if n = 1/2 [1], it was properly outlined only in the 19th century [2, 3, 4]. Due to the lack of a clear physical interpretation, their first applications in physics appeared only later, in the 20th century, in connection with visco-elastic phenomena [5, 6]. The topic later attracted quite general attention [7, 8, 9] and also found new applications in material science [10], the analysis of earthquake signals [11], the control of robots [12], the description of diffusion [13], etc.
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features of OCD infrastructures provide a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
Abstract:
This study was developed with the purpose of investigating the effect of polysaccharide/plasticiser concentration on the microstructure and molecular dynamics of polymeric film systems, using transmission electron microscopy (TEM) and nuclear magnetic resonance (NMR) techniques. Experiments were carried out in chitosan/glycerol films prepared with solutions of different composition. The films obtained after drying and equilibration were characterised in terms of composition, thickness and water activity. Results show that the glycerol quantities used in the film-forming solutions determined the films' composition, while the polymer/total plasticiser ratio in the solution determined the thickness (and thus the structure) of the films. These results were confirmed by TEM. NMR made it possible to understand the films' molecular rearrangement. Two different behaviours were observed for the two components analysed, water and glycerol: water moves predominantly free in the matrix, while glycerol is mainly bound to the chitosan chain.
Abstract:
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults in microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.
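The abstract does not fix a fault model; a common choice when targeting memory elements is the single bit flip, shown below in purely conceptual form. A bytearray stands in for target memory, whereas in a setup like this one the write would go through the on-chip debug port while the target runs; all names and sizes are illustrative.

```python
# Conceptual single-bit-flip fault injection into a memory image.
import random

def inject_bit_flip(memory: bytearray, rng: random.Random) -> tuple[int, int]:
    """Flip one random bit of the memory image; return (byte_addr, bit)."""
    addr = rng.randrange(len(memory))
    bit = rng.randrange(8)
    memory[addr] ^= 1 << bit            # emulate a single-event upset
    return addr, bit

memory = bytearray(64)                  # stand-in for a target RAM region
addr, bit = inject_bit_flip(memory, random.Random(42))
print(f"flipped bit {bit} of byte {addr}: {memory[addr]:#04x}")
```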
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
OBJECTIVE To describe the spatial distribution of avoidable hospitalizations due to tuberculosis in the municipality of Ribeirao Preto, SP, Brazil, and to identify spatial and space-time clusters for the risk of occurrence of these events. METHODS This is a descriptive, ecological study that considered the hospitalization records of the Hospital Information System for residents of Ribeirao Preto, SP, Southeastern Brazil, from 2006 to 2012. Only the cases with recorded addresses were considered for the spatial analyses, and they were geocoded. We resorted to kernel density estimation to identify the densest areas, the local empirical Bayes rate as the method for smoothing the incidence rates of hospital admissions, and the scan statistic for identifying clusters of risk. The software packages ArcGIS 10.2, TerraView 4.2.2, and SaTScan™ were used in the analysis. RESULTS We identified 169 hospitalizations due to tuberculosis. Most were of men (n = 134; 79.2%), with a mean age of 48 years (SD = 16.2). The predominant clinical form was the pulmonary one, confirmed through microscopic examination of expectorated sputum (n = 66; 39.0%). We geocoded 159 cases (94.0%). We observed a non-random spatial distribution of avoidable hospitalizations due to tuberculosis, concentrated in the northern and western regions of the municipality. Through the scan statistic, three spatial clusters of risk of hospitalization due to tuberculosis were identified: one in the northern region of the municipality (relative risk [RR] = 3.4; 95%CI 2.7–4.4); the second in the central region, where there is a prison unit (RR = 28.6; 95%CI 22.4–36.6); and the last in the southern region, an area of protection against hospitalizations (RR = 0.2; 95%CI 0.2–0.3). We did not identify any space-time clusters. CONCLUSIONS The investigation showed priority areas for the control and surveillance of tuberculosis, as well as the profile of the affected population, which highlights important aspects to be considered in the management and organization of health care services targeting effectiveness in primary health care.
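As a minimal sketch of the kernel density step named in METHODS, the code below assumes geocoded case coordinates in projected metres; the synthetic points and SciPy's default bandwidth are illustrative stand-ins for the study's ArcGIS 10.2 workflow.

```python
# Kernel density estimation over hypothetical geocoded cases.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
cases = rng.normal(loc=[2000.0, 3000.0], scale=800.0, size=(159, 2))  # (x, y) in m

kde = gaussian_kde(cases.T)                      # density surface over the points
gx, gy = np.mgrid[0:5000:100j, 0:6000:100j]      # evaluation grid
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

peak = np.unravel_index(density.argmax(), density.shape)
print("densest grid cell:", gx[peak], gy[peak])  # candidate hot-spot area
```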
Abstract:
When considering time series data of variables describing agent interactions in social neurobiological systems, measures of regularity can provide a global understanding of such system behaviors. Approximate entropy (ApEn) was introduced as a nonlinear measure to assess the complexity of a system behavior by quantifying the regularity of the generated time series. However, ApEn is not reliable when assessing and comparing the regularity of data series with short or inconsistent lengths, which often occur in studies of social neurobiological systems, particularly in dyadic human movement systems. Here, the authors present two normalized, nonmodified measures of regularity derived from the original ApEn, which are less dependent on time series length. The validity of the suggested measures was tested in well-established series (random and sine) prior to their empirical application, describing the dyadic behavior of athletes in team games. The authors use one of the normalized ApEn measures to generate 95th percentile envelopes that can be used to test whether a particular social neurobiological system is highly complex (i.e., generates highly unpredictable time series). Results demonstrated that the suggested measures may be considered valid instruments for measuring and comparing complexity in systems that produce time series of inconsistent lengths.
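For reference, the sketch below implements the original, unnormalized ApEn that the proposed measures derive from, following Pincus' definition ApEn = Φ^m − Φ^{m+1} with Chebyshev distance; the normalized variants introduced by the authors are not reproduced. It is exercised on the same kinds of well-established series (sine and random) the paper used for validation.

```python
# Original approximate entropy (Pincus); self-matches are included.
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy of a 1-D series x (radius r, embedding m)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                        # common heuristic radius

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])   # m-length templates
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)             # fraction of templates within r
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(500)
print("sine  :", apen(np.sin(0.2 * t)))          # regular -> low ApEn
print("random:", apen(rng.standard_normal(500))) # irregular -> higher ApEn
```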
Abstract:
This work extends a recent comparative study covering four different courses lectured at the Polytechnic of Porto - School of Engineering, with respect to the usage of a particular Learning Management System, i.e. Moodle, and its impact on students' results. A fifth course, which includes a number of resources especially supporting laboratory classes, is now added to the analysis. This particular course includes a number of remote experiments, made available through VISIR (Virtual Instrument Systems in Reality) and directly accessible through links included in the Moodle course page. We have analyzed the students' behavior in following these links and in effectively running experiments in VISIR (and also in using other lab-related resources in Moodle). These data were correlated with students' classifications in the lab component and in the exam, each weighing 50% of the final mark. We aimed to compare students' performance in a richly Moodle-supported environment (with a lab component) and in a poorly Moodle-supported environment (with only a theoretical component). This question followed from conclusions drawn in the above-referred comparative study, where it was shown that even though a positive correlation existed between the number of Moodle accesses and the final exam grade obtained by each student, the explanation behind it was not straightforward, as the quality of the resources was preponderant over their quantity.
Abstract:
An improved class of Boussinesq systems of an arbitrary order using a wave surface elevation and velocity potential formulation is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included. Thus, high-order source functions are considered. For the reduction of the system order and maintenance of some dispersive characteristics of the higher-order models, an extra O(μ^{2n+2}) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with inner penalty terms to calculate the numerical solutions of the improved fourth-order models. The discretization of the spatial variables is made using continuous P2 Lagrange elements. A predictor-corrector scheme with an initialization given by an explicit Runge–Kutta method is also used for the time-variable integration. Moreover, a CFL-type condition is deduced for the linear problem with a constant bathymetry. To demonstrate the applicability of the model, we considered several test cases. Improved stability is achieved.
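The time-stepping pattern named above (a predictor-corrector scheme started with an explicit Runge–Kutta step) is sketched below on a scalar test ODE. The concrete pair chosen here, an Adams–Bashforth-2 predictor with a trapezoidal corrector, and the equation y' = −y are illustrative assumptions; the paper applies the idea to the semi-discrete Boussinesq system.

```python
# Predictor-corrector integration with Runge-Kutta start-up (illustrative).
import numpy as np

def rk4_step(f, t, y, h):
    """One classical explicit Runge-Kutta step, used for initialization."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def pc_integrate(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t = t0 + h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    y[1] = rk4_step(f, t[0], y[0], h)            # explicit RK initialization
    for k in range(1, n):
        fk, fk1 = f(t[k], y[k]), f(t[k - 1], y[k - 1])
        pred = y[k] + h / 2 * (3 * fk - fk1)     # Adams-Bashforth-2 predictor
        y[k + 1] = y[k] + h / 2 * (f(t[k + 1], pred) + fk)  # trapezoidal corrector
    return t, y

t, y = pc_integrate(lambda t, y: -y, 1.0, 0.0, 5.0, 500)
print(abs(y[-1] - np.exp(-5.0)))                 # small global error
```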
Abstract:
The increasing integration of wind energy in power systems can be responsible for the occurrence of over-generation, especially during off-peak periods. This paper presents a dedicated methodology to identify and quantify the occurrence of this over-generation and to evaluate some of the solutions that can be adopted to mitigate this problem. The methodology is applied to the Portuguese power system, in which wind energy is expected to represent more than 25% of the installed capacity in the near future. The results show that the pumped-hydro units will not provide enough energy storage capacity and, therefore, wind curtailments are expected to occur in the Portuguese system. Additional energy storage devices could be implemented to offset the wind energy curtailments. However, the investment analysis performed shows that they are not economically viable, due to the high capital costs presently involved.
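A minimal sketch of how such over-generation can be identified and quantified hour by hour, assuming it is the surplus of inflexible (must-run) generation plus wind over demand, with pumped-hydro absorbing what it can; the hourly figures and single pumping limit are invented for illustration, not the Portuguese system data used in the paper.

```python
# Hourly over-generation and wind curtailment accounting (illustrative data).
import numpy as np

demand   = np.array([4200.0, 3900.0, 3700.0, 3600.0, 3800.0])  # MW, off-peak hours
must_run = np.array([2600.0, 2600.0, 2600.0, 2600.0, 2600.0])  # MW, inflexible units
wind     = np.array([1500.0, 1700.0, 1900.0, 2000.0, 1800.0])  # MW, wind output
pump_cap = 600.0                                    # MW, pumped-hydro intake limit

surplus     = np.maximum(0.0, must_run + wind - demand)   # raw over-generation
pumped      = np.minimum(surplus, pump_cap)               # absorbed by storage
curtailment = surplus - pumped                            # wind energy lost

print("curtailed energy:", curtailment.sum(), "MWh over", len(demand), "hours")
```

Whether extra storage pays off then reduces to comparing the value of the recovered curtailed energy against the storage capital cost, the trade-off the paper's investment analysis evaluates.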
Abstract:
Thesis for the Degree of Master of Science in Biotechnology, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia