Abstract:
Currently, most operational forecasting models use latitude-longitude grids, whose convergence of meridians towards the poles limits parallel scaling. Quasi-uniform grids might avoid this limitation. Thuburn et al. (JCP, 2009) and Ringler et al. (JCP, 2010) have developed a method for arbitrarily structured, orthogonal C-grids (TRiSK), which has many of the desirable properties of the C-grid on latitude-longitude grids but which works on a variety of quasi-uniform grids. Here, five quasi-uniform, orthogonal grids of the sphere are investigated using TRiSK to solve the shallow-water equations. We demonstrate some of the advantages and disadvantages of the hexagonal and triangular icosahedra, a Voronoi-ised cubed sphere, a Voronoi-ised skipped latitude-longitude grid and a grid of kites in comparison to a full latitude-longitude grid. We show that the hexagonal icosahedron gives the most accurate results for the least computational cost. All of the grids suffer from spurious computational modes; this is especially true of the kite grid, despite it having exactly twice as many velocity degrees of freedom as height degrees of freedom. However, the computational modes are easiest to control on the hexagonal icosahedron, since they consist of vorticity oscillations on the dual grid which can be controlled using a diffusive advection scheme for potential vorticity.
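For reference, the rotating shallow-water equations that TRiSK discretises are usually written in vector-invariant form; schematically (neglecting bottom topography),

\[
\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + q\,(h\mathbf{u})^{\perp} = -\nabla\left(gh + K\right),
\]

where \(q = (\zeta + f)/h\) is the potential vorticity and \(K = |\mathbf{u}|^{2}/2\) is the kinetic energy per unit mass. The diffusive potential-vorticity advection mentioned above acts on \(q\), which lives on the dual (vorticity) grid.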
Abstract:
Warfarin resistance was first discovered among Norway rat (Rattus norvegicus) populations in Scotland in 1958 and further reports of resistance, both in this species and in others, soon followed from other parts of Europe and the United States. Researchers quickly defined the practical impact of these resistance phenomena and developed robust methods by which to monitor their spread. These tasks were relatively simple because of the high degree of immunity to warfarin conferred by the resistance genes. Later, the second generation anticoagulants were introduced to control rodents resistant to the warfarin-like compounds, but resistance to difenacoum, bromadiolone and brodifacoum is now reported in certain localities in Europe and elsewhere. However, the adoption of test methods designed initially for use with the first generation compounds to identify resistance to compounds of the second generation has led to some practical difficulties in conducting tests and in establishing meaningful resistance baselines. In particular, the results of certain test methodologies are difficult to interpret in terms of the likely impact on practical control treatments of the resistance phenomena they seek to identify. This paper defines rodenticide resistance in the context of both first and second generation anticoagulants. It examines the advantages and disadvantages of existing laboratory and field methods used in the detection of rodent populations resistant to anticoagulants and proposes some improvements in the application of these techniques and in the interpretation of their results.
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved and does not add further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in ensemble space instead of state space, and the advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models; namely, an M-member ensemble splits into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion, showing that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses the spurious computational mode while avoiding distortion of the mean value of the filtered quantity. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is also found that the accuracy of medium-term forecasts is increased by using the RAW filter.
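For context, the RAW filter modifies the classical Robert-Asselin (RA) step applied after each leapfrog time step. Following Williams (2009), and up to conventions for where the filter parameter \(\nu\) is placed, it can be sketched as

\[
d_{n} = \frac{\nu}{2}\left(\bar{x}_{n-1} - 2x_{n} + x_{n+1}\right), \qquad
\bar{x}_{n} = x_{n} + \alpha\, d_{n}, \qquad
x_{n+1} \leftarrow x_{n+1} + (\alpha - 1)\, d_{n},
\]

where \(\alpha = 1\) recovers the RA filter and values of \(\alpha\) slightly above 0.5 give the reduced damping of the physical mode exploited in this work.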
Abstract:
Wireless Sensor Networks (WSNs) detect events using one or more sensors and then collect data about the detected events using these sensors. The data are aggregated and forwarded to a base station (sink) through wireless communication to support the required operations. Different kinds of MAC and routing protocols need to be designed for WSNs in order to guarantee data delivery from the source nodes to the sink. Some of the proposed MAC protocols for WSNs are discussed in this paper, together with their techniques and their advantages and disadvantages in terms of suitability for real-time applications. We conclude that most of these protocols cannot be applied to real-time applications without improvement.
Abstract:
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
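Schematically, in the planar analogue of the method, the new mesh is obtained by moving each point \(\mathbf{x}\) to \(\mathbf{x} + \nabla\phi\), where the mesh potential \(\phi\) satisfies a Monge-Ampère type equidistribution condition for a monitor function \(m\):

\[
m\left(\mathbf{x} + \nabla\phi\right)\,\det\left(I + \nabla\nabla\phi\right) = c,
\]

with \(c\) a normalisation constant. On the sphere the gradient map is replaced by an exponential map and, as described above, the Hessian determinant is evaluated from the change in cell volumes rather than from numerical derivatives of \(\phi\).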
Abstract:
Recruitment of patients to a clinical trial usually occurs over a period of time, resulting in the steady accumulation of data throughout the trial's duration. Yet, according to traditional statistical methods, the sample size of the trial should be determined in advance, and data collected on all subjects before analysis proceeds. For ethical and economic reasons, the technique of sequential testing has been developed to enable the examination of data at a series of interim analyses. The aim is to stop recruitment to the study as soon as there is sufficient evidence to reach a firm conclusion. In this paper we present the advantages and disadvantages of conducting interim analyses in phase III clinical trials, together with the key steps to enable the successful implementation of sequential methods in this setting. Examples are given of completed trials, which have been carried out sequentially, and references to relevant literature and software are provided.
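As a generic illustration (not specific to any trial discussed here): in a group-sequential design with \(K\) equally spaced interim analyses and standardised test statistics \(Z_{1},\dots,Z_{K}\), an O'Brien-Fleming-type rule stops recruitment and rejects the null hypothesis at look \(k\) if

\[
|Z_{k}| \ge c\sqrt{K/k},
\]

with the constant \(c\) chosen so that the overall type I error equals the nominal \(\alpha\); a Pocock-type rule instead applies the same critical value at every look. The choice of boundary shape governs how easily a trial can stop early, which is one of the trade-offs of interim analyses.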
Abstract:
Spectroscopic catalogues, such as GEISA and HITRAN, do not yet include information on the water vapour continuum that pervades the visible, infrared and microwave spectral regions. This is partly because, in some spectral regions, there are rather few laboratory measurements in conditions close to those in the Earth's atmosphere; hence understanding of the characteristics of the continuum absorption is still emerging. This is particularly so in the near-infrared and visible, where there has been renewed interest and activity in recent years. In this paper we present a critical review focusing on recent laboratory measurements in two near-infrared window regions (centred on 4700 and 6300 cm−1) and include reference to the window centred on 2600 cm−1, where more measurements have been reported. The rather few available measurements have used Fourier transform spectroscopy (FTS), cavity ring-down spectroscopy, optical-feedback cavity-enhanced laser spectroscopy and, in very narrow regions, calorimetric interferometry. These systems have different advantages and disadvantages. FTS can measure the continuum across both these and neighbouring windows; by contrast, the cavity laser techniques are limited to fewer wavenumbers but have a much higher inherent sensitivity. The available results present a diverse view of the characteristics of continuum absorption, with differences in continuum strength exceeding a factor of 10 in the cores of these windows. In individual windows, the temperature dependence of the water vapour self-continuum differs significantly in the few sets of measurements that allow such an analysis. The available data also indicate that the temperature dependence differs significantly between different near-infrared windows. These pioneering measurements provide an impetus for further measurements. Improvements and/or extensions of existing techniques would aid progress towards a full characterisation of the continuum; as an example, we report pilot measurements of the water vapour self-continuum using a supercontinuum laser source coupled to an FTS. Such improvements, as well as additional measurements and analyses in other laboratories, would enable the inclusion of the water vapour continuum in future spectroscopic databases, and therefore allow more reliable forward modelling of the radiative properties of the atmosphere. It would also allow a more confident assessment of different theoretical descriptions of the underlying cause or causes of continuum absorption.
Abstract:
This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time to the customers, with enough packages for each customer, using the available resources and, of course, as efficiently as possible. Although this problem seems easy to solve for a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times and truck capacities, which makes this a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the amount of calculation grows exponentially, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time available for the calculation runs out. The problem is introduced in the first chapter: starting from its basis, the Traveling Salesman Problem, and using some theoretical and mathematical background, it is shown why this problem is so hard to optimize and why, even though it is so hard and no best algorithm is known for a huge number of customers, it is still worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all packages were known. Although no best algorithm is known for this kind of optimization problem, we try to give an acceptable solution in the second and third chapters, where two algorithms are described: the Genetic Algorithm and Simulated Annealing. Both of them are inspired by processes of nature and materials science. These algorithms will hardly ever be able to find the best solution to the problem, but they are able to give a very good solution in special cases within an acceptable calculation time. In these chapters (2nd and 3rd) the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to a basic implementation of each. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and a comparison of the two with each other. Finally, after this theory has been presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. Both solve the same problem in the same environment and are compared with each other. The environment and the implementation are also described, as are the test results obtained. Finally, possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if this question even exists.
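As a hedged illustration of the kind of metaheuristic described in chapter 3, the sketch below applies simulated annealing to the routing core of the problem (a plain travelling-salesman tour); it is a generic sketch, not the thesis's implementation, and real VRP constraints such as capacities and time windows would have to be folded into the cost function or handled by repair moves.

    import math
    import random

    def tour_length(tour, dist):
        """Total length of a closed tour, given a symmetric distance matrix."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def simulated_annealing(dist, t_start=100.0, t_end=1e-3, cooling=0.995, moves_per_temp=100):
        """Minimise tour length using 2-opt moves and a geometric cooling schedule."""
        n = len(dist)
        current = list(range(n))
        random.shuffle(current)
        best = current[:]
        temp = t_start
        while temp > t_end:
            for _ in range(moves_per_temp):
                i, j = sorted(random.sample(range(n), 2))
                # 2-opt move: reverse the segment between positions i and j.
                candidate = current[:i] + current[i:j + 1][::-1] + current[j + 1:]
                delta = tour_length(candidate, dist) - tour_length(current, dist)
                # Always accept improvements; accept worse tours with Boltzmann probability.
                if delta < 0 or random.random() < math.exp(-delta / temp):
                    current = candidate
                    if tour_length(current, dist) < tour_length(best, dist):
                        best = current[:]
            temp *= cooling
        return best, tour_length(best, dist)

    if __name__ == "__main__":
        # Toy symmetric distance matrix for four customers.
        d = [[0, 2, 9, 10],
             [2, 0, 6, 4],
             [9, 6, 0, 3],
             [10, 4, 3, 0]]
        print(simulated_annealing(d))

The cooling schedule, the neighbourhood move and the acceptance rule are the main tuning choices here; a genetic algorithm instead maintains a population of candidate tours and applies crossover and mutation operators.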
Abstract:
This research is based on consumer complaints about recently purchased consumer electronics. It investigates device management as a tool used to aid consumers and manage consumers' mobile products in order to resolve issues before, or as soon as, the consumer is aware one exists. The problem at present is that mobile devices are becoming very advanced pieces of technology, and not all manufacturers and network providers have kept the support offered to end users up to the same standard. The subject of the research is therefore to investigate how device management could be used as a method to promote research and development of mobile devices and to provide a better experience for the consumer. The wireless world is becoming increasingly complex as revenue opportunities are driven by new and innovative data services, and we can no longer expect the customer to have the knowledge or ability to configure their own device. Device management (DM) platforms can address the challenges of device configuration and support through new enabling technologies. Leveraging these technologies allows a network operator to reduce the cost of subscriber ownership, drive increased ARPU (Average Revenue per User) by removing barriers to adoption, reduce churn by improving the customer experience, and increase customer loyalty. DM technologies provide a flexible and powerful management method, but they manage the same device features that have historically been configured manually through call centres or by the end user making changes directly on the device. For this reason DM technologies must be treated as part of a wider support solution. The traditional requirements for discovery, fault finding, troubleshooting and diagnosis are as relevant with DM as they are in the current human support environment, yet the current generation of solutions does little to address this. In deploying an effective device management solution, the network operator must consider the integration of the DM platform, interfacing with many areas of the business, supported by knowledge of the relationships between devices, applications, solutions and services that is maintained on an ongoing basis. Complementing the DM solution with published device information, setup guides, training material and web-based tools will ensure the quality of the customer experience, ensuring that problems are completely resolved and driving data usage by focusing customer education on the use of the wireless service. In this way device management becomes a tool used both internally, within the network operator or device vendor, and by customers themselves, with each user empowered to manage the device effectively without any prior knowledge or experience, confident that the changes they apply will be relevant, accurate, stable and compatible. The value offered by an effective DM solution with an expert knowledge service will become a significant differentiator for the network operator in an ever more competitive wireless market. This research document is intended to highlight some of the issues the industry faces as device management technologies become more prevalent, and it offers some potential solutions to simplify the increasingly complex task of managing devices on the network, where device management can be used as a tool to aid customer relations and manage customers' mobile products in order to resolve issues before the user is aware one exists.
The research is broken down into the following areas: customer relationship management, device management, the role of knowledge within DM, companies that have successfully implemented device management, and the future of device management and CRM. It also includes questionnaires aimed at technical support agents and mobile device users, and interviews carried out with CRM managers within a support centre to extend the evidence gathered. To conclude, the document considers the advantages and disadvantages of device management and attempts to determine the influence it will have on customer support centres, and what methods could be used to implement it.
Abstract:
In this thesis the solar part of a large grid-connected photovoltaic system has been designed. The main purpose was to size and optimize the system and to present figures that help evaluate the rationality of the prospective project, which could potentially be constructed on a contaminated area in Falun. The methodology consisted of a PV market study and component selection, a site analysis defining the area suitable for the solar installation, and system configuration optimization based on PVsyst simulations and Levelized Cost of Energy (LCOE) calculations. The procedure was divided into two parts, preliminary and detailed sizing. In the first part the objective was twofold: to investigate the most profitable component combination and to optimize the system with respect to tilt and row distance. This was done by simulating systems with different components and orientations, all sized for the same 100 kW inverter in order to make a fair comparison. A simplified LCOE calculation was applied to each simulated result. The main results of this part show that, at a price of 0.43 €/Wp, thin-film modules were the most cost-effective solution for this case, with a great advantage over crystalline modules in terms of financial attractiveness. From the results of the preliminary study it was possible to select the optimal system configuration, which was used as a starting point for the detailed sizing. In this part full-scale PVsyst simulations were run, considering the near shadings created by factory buildings. Additionally, a more detailed LCOE calculation was used here, considering insurance, maintenance, the time value of money and possible cost reductions due to the system size. Two system options were proposed in the final results; both cover the same area of 66,000 m2. The first represents an ordinary south-facing design with 1.1 MW nominal power, optimized for the highest performance. According to the PVsyst simulations, this system should produce 1108 MWh/year with an initial investment of 835,000 € and an LCOE of 0.056 €/kWh. The second option has an alternative east-west orientation, which allows 80% of the occupied ground to be covered and consequently gives 6.6 MW of PV nominal power. This system produces 5388 MWh/year, costs about 4,500,000 € and delivers electricity at the same price of 0.056 €/kWh. Even though the east-west solution has 20% lower specific energy production, it benefits mainly from lower relative costs for inverters and mounting and from lower annual maintenance expenses. After analyzing the performance results, neither of the two alternatives showed a clear superiority, so no single optimal system was proposed. Both the south and the east-west solutions have their own advantages and disadvantages in terms of energy production profile, configuration, installation and maintenance. Furthermore, the uncertainty in the cost assumptions limits the reliability of the results.
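For reference, the basic form of the levelized cost of energy used in such comparisons (the detailed procedure described above additionally accounts for insurance, maintenance and size-dependent cost reductions within the cost terms) is

\[
\mathrm{LCOE} = \frac{I_{0} + \sum_{t=1}^{N} C_{t}/(1+r)^{t}}{\sum_{t=1}^{N} E_{t}/(1+r)^{t}},
\]

where \(I_{0}\) is the initial investment, \(C_{t}\) and \(E_{t}\) are the operating costs and the energy yield in year \(t\), \(r\) is the discount rate and \(N\) is the system lifetime.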
Abstract:
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g. for playback) through decoding: the decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e. algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among them. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison of these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
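As a small, hedged illustration of the transform at the heart of AAC, the sketch below computes a naive O(N²) MDCT of an already-windowed frame; real encoders use a fast, FFT-based implementation together with window switching, a psychoacoustic model and quantisation, none of which is shown here.

    import numpy as np

    def mdct(frame):
        """Naive MDCT: maps a windowed frame of 2N samples to N coefficients.

        X_k = sum_{n=0}^{2N-1} x_n * cos[(pi/N) * (n + 0.5 + N/2) * (k + 0.5)]
        """
        two_n = len(frame)
        half = two_n // 2
        n = np.arange(two_n)
        k = np.arange(half)
        basis = np.cos(np.pi / half * (n[None, :] + 0.5 + half / 2) * (k[:, None] + 0.5))
        return basis @ frame

    if __name__ == "__main__":
        # A single sine-windowed frame; encoders process 50%-overlapping frames.
        N = 8
        window = np.sin(np.pi * (np.arange(2 * N) + 0.5) / (2 * N))
        print(mdct(window * np.random.randn(2 * N)))

The 2N-to-N mapping is what keeps the transform critically sampled despite the 50% overlap between consecutive frames.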
Abstract:
A stir bar sorptive extraction with liquid desorption followed by large volume injection coupled to gas chromatography–quadrupole mass spectrometry (SBSE-LD/LVI-GC–qMS) was evaluated for the simultaneous determination of higher alcohol acetates (HAA), isoamyl esters (IsoE) and ethyl esters (EE) of fatty acids. The method performance was assessed and compared with another solventless technique, solid-phase microextraction (SPME) in headspace mode (HS). For both techniques, the influential experimental parameters were optimised to provide sensitive and robust methods. The SBSE-LD/LVI methodology was first optimised in terms of extraction time, influence of ethanol in the matrix, liquid desorption (LD) conditions and instrumental settings. The highest extraction efficiency was obtained using 60 min of extraction time, 10% ethanol content, n-pentane as desorption solvent, 15 min for the back-extraction period, 10 mL min−1 for the solvent vent flow rate and 10 °C for the inlet temperature. For HS-SPME, the fibre coated with 50/30 μm divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) afforded the highest extraction efficiency, providing the best sensitivity for the target volatiles, particularly when the samples were extracted at 25 °C for 60 min under continuous stirring in the presence of sodium chloride (10% (w/v)). Both methodologies showed good linearity over the concentration range tested, with correlation coefficients higher than 0.984 for HS-SPME and 0.982 for the SBSE-LD approach for all analytes. Good reproducibility was attained, and low detection limits were achieved using both the SBSE-LD (0.03–28.96 μg L−1) and HS-SPME (0.02–20.29 μg L−1) methodologies. The quantification limits ranged from 0.11 to 96.56 μg L−1 for the SBSE-LD approach and from 0.06 to 67.63 μg L−1 for HS-SPME. Using the HS-SPME approach an average recovery of about 70% was obtained, whilst the average recoveries obtained using SBSE-LD were close to 80%. The analytical and procedural advantages and disadvantages of these two methods have been compared. Both analytical methods were used to determine the HAA, IsoE and EE fatty acid content in "Terras Madeirenses" table wines. A total of 16 esters were identified and quantified from the wine extracts by HS-SPME, whereas 25 esters were found by the SBSE-LD technique, including 2 higher alcohol acetates, 4 isoamyl esters and 19 ethyl esters of fatty acids. In general, SBSE-LD provided higher sensitivity with decreased analysis time.
Abstract:
Online geographic databases have been growing rapidly as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information and to enable the fastest possible response from them. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geographic-based system also has very high requirements in terms of scalability, computational power and domain complexity, which cause several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages in this scenario, in particular graph databases, which are capable of modeling vast amounts of interconnected data while providing a very substantial increase in performance for several spatial requests, such as finding shortest-path routes and performing relationship lookups with high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, and several stress tests were performed in order to find the advantages and disadvantages of each database paradigm.
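As a hedged sketch of the kind of query a graph database makes cheap, the example below uses the official neo4j Python driver; the node label Place, the relationship type ROAD, the name property and the connection details are illustrative assumptions, not GE's actual schema.

    from neo4j import GraphDatabase

    # Illustrative connection details; replace with the real instance and credentials.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Hypothetical schema: (:Place {name}) nodes connected by [:ROAD] relationships.
    SHORTEST_ROUTE = """
    MATCH (a:Place {name: $src}), (b:Place {name: $dst}),
          p = shortestPath((a)-[:ROAD*]-(b))
    RETURN [node IN nodes(p) | node.name] AS route
    """

    def shortest_route(src, dst):
        """Return the list of place names along the shortest path between two places."""
        with driver.session() as session:
            record = session.run(SHORTEST_ROUTE, src=src, dst=dst).single()
            return record["route"] if record else None

    if __name__ == "__main__":
        print(shortest_route("Lisboa", "Porto"))

An equivalent query over a relational schema typically requires a recursive join or an application-side traversal, which is the kind of difference the stress tests mentioned above are designed to expose.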
Pharyngeal clearance and pharyngeal transit time determined by a biomagnetic method in normal humans
Abstract:
Clearance and transit time are parameters of great value in studies of digestive transit. Such parameters are nowadays obtained by means of scintigraphy and videofluoroscopy, with each technique having advantages and disadvantages. In this study we present a new, noninvasive method to study swallowing pharyngeal clearance (PC) and pharyngeal transit time (PTT). This new method is based on variations of magnetic flux produced by a magnetic bolus passing through the pharynx and detected by an AC biosusceptometer (ACB). These measurements may be performed in a simple way, cause no discomfort, and do not use radiation. We measured PC in 8 volunteers (7 males and 1 female, 23-33 years old) and PTT in 8 other volunteers (7 males and 1 female, 21-29 years old). PC was 0.82 +/- 0.10 s (mean +/- SD) and PTT was 0.75 +/- 0.03 s. The results were similar for PC but longer for PTT than those determined by means of other techniques. We conclude that the biomagnetic method can be used to evaluate PC and PTT.
Abstract:
Previous analyses of mitochondrial (mt)DNA and allozymes covering the range of the Iberian endemic golden-striped salamander, Chioglossa lusitanica, suggested a Pleistocene split of the historical species distribution into two population units (north and south of the Mondego river), postglacial expansion into the northernmost extant range, and secondary contact with neutral diffusion of genes close to the Mondego river. We extended the analysis of molecular variation over the species range using seven microsatellite loci and the nuclear P-fibrinogen intron 7 (beta-fibint7). Both microsatellites and beta-fibint7 showed moderate to high levels of population structure, concordant with the patterns detected with mtDNA and allozymes, and a general pattern of isolation-by-distance, contrasting with the marked differentiation of two population groups suggested by mtDNA and allozymes. Bayesian multilocus analyses showed contrasting results, as populations north and south of the Douro river were clearly differentiated based on microsatellites, whereas allozymes revealed differentiation north and south of the Mondego river. Additionally, decreased microsatellite variability in the north supported the hypothesis of postglacial colonization of this region. The well-documented evolutionary history of C. lusitanica provides an excellent framework within which the advantages and limitations of different classes of markers can be evaluated in defining patterns of population substructure and inferring evolutionary processes across distinct spatio-temporal scales. The present study serves as a cautionary note for investigations that rely on a single type of molecular marker, especially when the organism under study exhibits a widespread distribution and a complex natural history. (C) 2008 The Linnean Society of London, Biological Journal of the Linnean Society, 2008, 95, 371-387.