948 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance:

100.00%

Publisher:

Abstract:

In this work, a study was performed to obtain the parameters of a 1D regional velocity model for the Borborema Province, NE Brazil. We used earthquakes that occurred between 2001 and 2013 with magnitude greater than 2.9 mb, with epicentres determined either from local seismic networks or, when possible, by back-azimuth determination. We chose seven events that occurred in the main seismic areas of the Borborema Province. The selected events were recorded by up to 74 seismic stations from the following networks: RSISNE, INCT-ET, João Câmara – RN, São Rafael – RN, Caruaru – PE, São Caetano – PE, Castanhão – CE, Santana do Acarau – CE, Taipu – RN and Sobral – CE, plus the RCBR station (IRIS/USGS-GSN). The model parameters were determined by inverting a travel-time table and assessing its fit. These parameters were compared with other known models (global and regional) and improved the epicentral determination. The final parameter set, which we call MBB, is laterally homogeneous, with an upper crust extending to 11.45 km depth and a total crustal thickness of 33.9 km. The P-wave velocity was estimated at 6.0 km/s in the upper crust and 6.64 km/s in the lower crust. The P-wave velocity in the upper mantle was estimated at 8.21 km/s, with a VP/VS ratio of approximately 1.74.
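As a rough illustration of how such a layered model predicts travel times, the sketch below compares the direct crustal (Pg) and Moho-refracted (Pn) arrivals for a single flat crustal layer over a half-space, using the velocities and total crustal thickness reported above (the actual MBB model has two crustal layers, and the epicentral distance here is illustrative):

```python
import math

def direct_time(x, v1):
    # direct crustal arrival (Pg): straight path at crustal velocity
    return x / v1

def head_wave_time(x, v1, v2, h):
    # Moho-refracted arrival (Pn) for a single flat layer over a half-space
    return x / v2 + 2 * h * math.sqrt(v2 ** 2 - v1 ** 2) / (v1 * v2)

v1, v2, h = 6.0, 8.21, 33.9  # km/s, km/s, km (from the abstract; single-layer simplification)
x = 300.0                    # illustrative epicentral distance in km
t_pg = direct_time(x, v1)
t_pn = head_wave_time(x, v1, v2, h)  # at this range Pn arrives before Pg
```

At short distances the direct wave arrives first; beyond the crossover distance the refracted wave overtakes it, which is the basic observation a travel-time inversion exploits.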

Relevance:

100.00%

Publisher:

Abstract:

Part of the work of an insurance company is to keep claims reserves, known as technical reserves, in order to mitigate the risk inherent in its activities and comply with legal obligations. There are several methods for estimating claims reserves, both deterministic and stochastic. One of the most widely used is the deterministic Chain Ladder method, which is simple to apply. However, deterministic methods produce only point estimates, which is why stochastic methods have become increasingly popular: they can produce interval estimates, measuring the variability inherent in the technical reserves. In this study, deterministic methods (Grossing Up, Link Ratio and Chain Ladder) and stochastic methods (Thomas Mack, and Bootstrap combined with an over-dispersed Poisson model) are applied to estimate the claims reserves for automobile material damage occurred up to December 2012. The data used in this research come from a real database provided by AXA Portugal. A comparison of the results obtained by the different methods is presented.
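As an illustration of the Chain Ladder mechanics mentioned above, the sketch below completes a small cumulative run-off triangle using volume-weighted development factors; the figures are made up for the example, not the AXA Portugal data:

```python
# Cumulative run-off triangle: rows = accident years, columns = development years
triangle = [
    [1000.0, 1800.0, 2100.0, 2200.0],
    [1100.0, 2000.0, 2300.0],
    [1200.0, 2100.0],
    [1300.0],
]

def development_factors(tri):
    # volume-weighted ratio of column j+1 to column j over all rows that have both
    n = len(tri[0])
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in tri if len(row) > j + 1)
        den = sum(row[j] for row in tri if len(row) > j + 1)
        factors.append(num / den)
    return factors

def complete_triangle(tri):
    # project each row to ultimate by chaining the development factors
    f = development_factors(tri)
    full = [row[:] for row in tri]
    for row in full:
        for j in range(len(row) - 1, len(tri[0]) - 1):
            row.append(row[j] * f[j])
    return full

def reserves(tri):
    # reserve = projected ultimate claims minus latest observed diagonal
    full = complete_triangle(tri)
    return [full[i][-1] - tri[i][-1] for i in range(len(tri))]
```

The point estimate per accident year is the projected ultimate minus the paid-to-date amount; stochastic variants such as Mack's method or the bootstrap put prediction intervals around these numbers.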

Relevance:

100.00%

Publisher:

Abstract:

River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2015) on a 0.5° x 0.5° grid. The performance of the newly derived runoff estimates is assessed in terms of cross validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
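The statistical step (relating gridded atmospheric variables to observed runoff) can be caricatured with a one-predictor least-squares fit; the paper uses machine learning on E-OBS fields, and the numbers below are synthetic, not E-OBS data:

```python
def ols_fit(xs, ys):
    # ordinary least squares for a single predictor: y = intercept + slope * x
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

precip = [30.0, 50.0, 80.0, 120.0, 60.0, 40.0]  # synthetic monthly precipitation (mm)
runoff = [10.0, 18.0, 30.0, 45.0, 22.0, 14.0]   # synthetic monthly runoff (mm)
a, b = ols_fit(precip, runoff)

def predict(x):
    return a + b * x
```

A real pipeline would replace this with the trained machine-learning model and evaluate it, as in the paper, by cross validation on held-out stations.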

Relevance:

100.00%

Publisher:

Abstract:

River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 11) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2014) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed in terms of cross validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.

Relevance:

100.00%

Publisher:

Abstract:

Biotic interactions can have large effects on species distributions yet their role in shaping species ranges is seldom explored due to historical difficulties in incorporating biotic factors into models without a priori knowledge on interspecific interactions. Improved SDMs, which account for biotic factors and do not require a priori knowledge on species interactions, are needed to fully understand species distributions. Here, we model the influence of abiotic and biotic factors on species distribution patterns and explore the robustness of distributions under future climate change. We fit hierarchical spatial models using Integrated Nested Laplace Approximation (INLA) for lagomorph species throughout Europe and test the predictive ability of models containing only abiotic factors against models containing abiotic and biotic factors. We account for residual spatial autocorrelation using a conditional autoregressive (CAR) model. Model outputs are used to estimate areas in which abiotic and biotic factors determine species’ ranges. INLA models containing both abiotic and biotic factors had substantially better predictive ability than models containing abiotic factors only, for all but one of the four species. In models containing abiotic and biotic factors, both appeared equally important as determinants of lagomorph ranges, but the influences were spatially heterogeneous. Parts of widespread lagomorph ranges highly influenced by biotic factors will be less robust to future changes in climate, whereas parts of more localised species ranges highly influenced by the environment may be less robust to future climate. SDMs that do not explicitly include biotic factors are potentially misleading and omit a very important source of variation. For the field of species distribution modelling to advance, biotic factors must be taken into account in order to improve the reliability of predicting species distribution patterns both presently and under future climate change.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Although most gastrointestinal stromal tumours (GIST) carry oncogenic mutations in KIT exons 9, 11, 13 and 17, or in platelet-derived growth factor receptor alpha (PDGFRA) exons 12, 14 and 18, around 10% of GIST are free of these mutations. Genotyping and accurate detection of KIT/PDGFRA mutations in GIST are becoming increasingly useful for clinicians in the management of the disease. METHOD: To evaluate and improve laboratory practice in GIST mutation detection, we developed a mutational screening quality control program. Eleven laboratories were enrolled in this program and 50 DNA samples were analysed, each of them by four different laboratories, giving 200 mutational reports. RESULTS: In total, eight mutations were not detected by at least one laboratory. One false positive result was reported in one sample. Thus, the mean global rate of error with clinical implication based on 200 reports was 4.5%. Concerning specific polymorphisms detection, the rate varied from 0 to 100%, depending on the laboratory. The way mutations were reported was very heterogeneous, and some errors were detected. CONCLUSION: This study demonstrated that such a program was necessary for laboratories to improve the quality of the analysis, because an error rate of 4.5% may have clinical consequences for the patient.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system while satisfying all the interference power constraints at the primary users, as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a non-convex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
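The zero-forcing idea used above has a simple geometric core: the jamming beam is projected onto the orthogonal complement of the intended user's channel, so the jamming causes that user no interference. A real-valued toy (actual systems use complex channel vectors and multiple constraints):

```python
def project_out(v, h):
    # zero-forcing projection: remove from v its component along channel h,
    # so the returned jamming direction w satisfies <w, h> = 0
    dot_vh = sum(a * b for a, b in zip(v, h))
    dot_hh = sum(a * a for a in h)
    return [a - (dot_vh / dot_hh) * b for a, b in zip(v, h)]

h = [1.0, 2.0, -1.0]   # hypothetical real-valued channel to the secondary user
v = [0.5, -0.3, 1.2]   # arbitrary initial jamming direction
w = project_out(v, h)  # jamming direction orthogonal to h
```

After this projection the remaining degrees of freedom of w can be optimized to degrade the eavesdroppers' channels, which is the role of the JN beamforming in the paper.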

Relevance:

100.00%

Publisher:

Abstract:

In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors are not accumulated into the model, they tolerate inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the point clouds. We utilize a GPGPU-based implementation of the ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM.
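The alignment step at the heart of this kind of method can be sketched in 2D: given corresponding points from the rendered model cloud and the captured cloud, one ICP iteration solves for the rigid transform in closed form. This is a toy with known, noise-free correspondences; the paper's GPGPU implementation works on full 3D depth maps and re-establishes correspondences each iteration:

```python
import math

def icp_step(src, dst):
    # one point-to-point ICP iteration in 2D with known correspondences:
    # closed-form optimal rotation and translation (2D Procrustes)
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys
        bx, by = xd - cxd, yd - cyd
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)      # translation moving rotated src centroid onto dst centroid
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ct, st = math.cos(0.1), math.sin(0.1)
dst = [(ct * x - st * y + 0.5, st * x + ct * y - 0.2) for x, y in src]
theta, tx, ty = icp_step(src, dst)  # recovers rotation 0.1 rad, translation (0.5, -0.2)
```

With noisy data and unknown correspondences, the step is repeated, re-matching closest points each time, until the incremental pose change falls below a threshold.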

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research is to study, by mathematical modeling, the sedimentation mechanism in access channels affected by tidal currents. The most important factor for understanding the sedimentation process in any water environment is its flow pattern, which is affected by the geometry and shape of the environment as well as by the forcing present in the area. The study area of this thesis is Bushehr Gulf and its access channels (inner and outer). The study uses hydrodynamic modeling on unstructured, non-overlapping triangular grids with the finite volume method, at two scales, a large scale (200 m to 7.5 km) and a small scale (50 m to 7.5 km), over two time spans of 15 days and 3.5 days, to obtain the flow patterns. The 2D governing equations used in the model are the depth-averaged shallow water equations. Turbulence modeling is required to calculate the eddy viscosity coefficient, here using the Smagorinsky model with a coefficient of 0.3. In addition to the flow modeling at the two scales, the 3.5-day tidal current modeling was used to study the sediment equilibrium in the area and the channels. The model can identify the areas being settled and eroded and the effects of the tidal currents on these processes. The required input data, such as current and sediment measurements, were obtained from field measurements in Bushehr Gulf and the access channels carried out for the PSO (Port and Shipping Organization) project "The Sedimentation Modeling in Bushehr Port" in 1379. Hydrographic data were obtained from Admiralty charts (2003) and the Cartography Organization (1378, 1379).
The modeling results include cross-shore currents on the northern and north-western coasts of Bushehr Gulf during neap tide, and the same type of currents on the northern and north-eastern coasts during spring tide. These currents wash and carry fine particles (silt, clay and mud) from the coastal bed, which generally consists of mud and clay with some silt. In this regard, the role of the sediments of the islands in this area, including islands built from deposited dredged material, should not be ignored. The 3.5-day modeling shows that cross-channel currents create deposition zones in the inner and outer channels over the tidal period. During neap tide the current enters the channels from the upstream bend of the two channels and the outer channel, then crosses the outer channel obliquely in some places. Oblique, or even nearly perpendicular, currents from the up-slope side of the inner channel between buoys No. 15 and No. 18 interact with the currents flowing parallel to the channel and create secondary oblique currents, which exit as down-slope currents and cause deposition of bed sediments as well as settling of the suspended sediments they carry. In addition, in the outer channel the speed of the parallel currents increases in the bend of the channel, which is naturally deeper, leading to erosion and suspension of sediments there. The suspended sediments carried by this current, which flows parallel to the channel axis, slow down when they pass through the shallower part of the channel between buoys No. 7 and 8 and No. 5 and 6; there the suspended sediment settles, making these places even shallower. Furthermore, the oblique upstream flow causes deposition on the up-slope side, further decreasing the depth of these locations.
In the down-slope channel, by contrast, the sediment and current modeling indicates that the current speed increases, suspending and carrying away particles from the down-slope bed; the sediments then settle over a vast area downstream of both channels. At the end of the neap tide, this process, together with the circulations in the area, produces eddies that cause further sedimentation. During spring tide, part of this active sedimentation zone re-enters both channels in a reverse process. The processes described above and the locations of sedimentation and erosion in the inner and outer channels are validated by the sediment equilibrium modeling, which can also estimate the suspended load, bed load and boundary layer thickness at each point of both channels and across the modeled area.
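The turbulence closure named above is a one-line formula: the Smagorinsky model sets the eddy viscosity from the model coefficient, a grid length scale and the local strain-rate magnitude. Only the coefficient Cs = 0.3 comes from the text; the cell size and strain rate below are assumed for illustration:

```python
def smagorinsky_viscosity(cs, delta, strain_rate_mag):
    # Smagorinsky eddy viscosity: nu_t = (Cs * delta)^2 * |S|
    return (cs * delta) ** 2 * strain_rate_mag

# Cs = 0.3 from the abstract; 200 m grid cell and |S| = 1e-4 1/s are assumed values
nu_t = smagorinsky_viscosity(0.3, 200.0, 1e-4)  # m^2/s
```

The resulting eddy viscosity enters the depth-averaged shallow water equations as the turbulent momentum diffusion coefficient.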

Relevance:

100.00%

Publisher:

Abstract:

Wireless power transfer (WPT) and radio frequency (RF)-based energy harvesting give rise to a new wireless network paradigm termed the wireless powered communication network (WPCN), in which energy-constrained nodes can harvest energy from the RF signals transferred by other, energy-sufficient nodes to support communication operations, making it a promising approach for future energy-constrained wireless network design. In this paper, we focus on optimal WPCN design. We consider a network composed of two communication groups, where the first group has a sufficient power supply but no available bandwidth, and the second group has licensed bandwidth but very limited power for its required information transmission. For such a system, we introduce power and bandwidth cooperation between the two groups so that both groups can accomplish their information delivery tasks. Multiple antennas are employed at the hybrid access point (H-AP) to enhance both energy and information transfer efficiency, and cooperative relaying is employed to help the power-limited group increase its information transmission throughput. Compared with existing works, cooperative relaying, time assignment, power allocation and energy beamforming are jointly designed in a single system. Firstly, we propose a cooperative transmission protocol for the considered system, in which group 1 transmits some power to group 2 to assist its information transmission, and group 2 then gives some bandwidth to group 1 in return. Secondly, to explore the information transmission performance limit of the system, we formulate two optimization problems that maximize the system weighted sum rate by jointly optimizing the time assignment, power allocation and energy beamforming under two different power constraints, namely a fixed power constraint and an average power constraint.
In order to make the cooperation between the two groups meaningful and to guarantee the quality-of-service (QoS) requirements of both groups, the minimum required data rates of the two groups are imposed as constraints in the optimal system design. As both problems are non-convex with no known solutions, we solve them using suitable variable substitutions and semi-definite relaxation (SDR), and we prove theoretically that the proposed solution method is guaranteed to find the global optimal solution. Thirdly, considering that the WPCN has promising applications in future energy-constrained networks, e.g., wireless sensor networks (WSN), wireless body area networks (WBAN) and the Internet of Things (IoT), where power consumption is critical, we investigate the minimal-power-consumption optimal design for the considered cooperative WPCN. For this, we formulate an optimization problem that minimizes the total consumed power by jointly optimizing the time assignment, power allocation and energy beamforming under required data rate constraints. As this problem is also non-convex with no known solution, we again solve it via variable substitutions and the SDR method, and we prove theoretically that the proposed solution guarantees the global optimum. Extensive experimental results illustrate the system performance behavior and provide useful insights for future WPCN design: the average-power-constrained system achieves a higher weighted sum rate than the fixed-power-constrained system, and in such a WPCN the relay should be placed closer to the multi-antenna H-AP to achieve a higher weighted sum rate and a lower total power consumption.

Relevance:

100.00%

Publisher:

Abstract:

For the past three decades, the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond current standards without a considerable increase in price. Homogeneous Charge Compression Ignition (HCCI) is one combustion technique with the potential to partially meet the current critical challenges, including CAFE standards and the stringent EPA emissions standards. HCCI runs on much leaner mixtures than current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis has two main parts. One part develops an HCCI experimental setup; the other focuses on developing a grey-box modelling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run it in HCCI mode, together with the details and specifications of all the sensors, actuators and other auxiliary parts attached to the conventional SI engine in order to run and monitor it in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups on two different engines are studied, and a grey-box model for emission prediction is developed.
The grey-box model is trained on 75% of the data, with the remaining data used for validation. In this study, the grey-box model improved the accuracy of engine performance prediction by an average of 70% over an empirical (black-box) model. The grey-box model offers a solution to the difficulty of real-time control of an HCCI engine, and is the first control-oriented model in the literature for predicting HCCI engine emissions for control purposes.
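The grey-box idea itself is simple to state: combine a physics-inspired baseline with a data-fitted correction, rather than fitting a purely empirical (black-box) model. A schematic sketch only; the baseline function, the constant-residual correction and all numbers below are invented for illustration and do not reproduce the thesis' actual model structure:

```python
def physics_baseline(load):
    # hypothetical physics-inspired trend: emissions proportional to load
    return 2.0 * load

loads = [1.0, 2.0, 3.0, 4.0]
measured = [2.5, 4.4, 6.6, 8.5]  # synthetic emission measurements

# grey-box step: fit a correction term to the residuals on training data
residual = sum(m - physics_baseline(x) for x, m in zip(loads, measured)) / len(loads)

def grey_box(load):
    # white-box baseline plus black-box (here: constant) correction
    return physics_baseline(load) + residual
```

The appeal for control is that the physical part generalizes outside the training data, while the fitted part absorbs the effects the physics omits.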

Relevance:

100.00%

Publisher:

Abstract:

Leishmaniasis, caused by Leishmania infantum, is a vector-borne zoonotic disease that is endemic to the Mediterranean basin. The potential of rabbits and hares to serve as competent reservoirs for the disease has recently been demonstrated, although assessment of the importance of their role in disease dynamics is hampered by the absence of quantitative knowledge on the accuracy of diagnostic techniques in these species. A Bayesian latent-class model was used here to estimate the sensitivity and specificity of the immunofluorescence antibody test (IFAT) in serum and a Leishmania-nested PCR (Ln-PCR) in skin for samples collected from 217 rabbits and 70 hares from two different populations in the region of Madrid, Spain. A two-population model was used, assuming conditional independence between test results and incorporating prior information on the performance of the tests in other animal species obtained from the literature. Two alternative cut-off values were assumed for the interpretation of the IFAT results: 1/50 for a conservative and 1/25 for a sensitive interpretation. Results suggest that the sensitivity and specificity of the IFAT were around 70–80%, whereas the Ln-PCR was highly specific (96%) but had limited sensitivity (28.9% with the conservative interpretation and 21.3% with the sensitive one). Prevalence was higher in the rabbit population (50.5% and 72.6% for the conservative and sensitive interpretations, respectively) than in hares (6.7% and 13.2%). Our results demonstrate that the IFAT may be a useful screening tool for the diagnosis of leishmaniasis in rabbits and hares. These results will help to design and implement surveillance programmes in wild species, with the ultimate objective of detecting early and preventing incursions of the disease into domestic and human populations.
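A simpler, non-Bayesian relative of the estimation problem above is the Rogan-Gladen correction, which recovers true prevalence from apparent (test-positive) prevalence once sensitivity and specificity are known. The paper itself uses a Bayesian latent-class model precisely because no gold standard is available; the inputs below are illustrative values in the ranges reported:

```python
def rogan_gladen(apparent_prev, se, sp):
    # invert apparent = se*p + (1 - sp)*(1 - p) for the true prevalence p
    return (apparent_prev + sp - 1.0) / (se + sp - 1.0)

se, sp = 0.75, 0.96      # illustrative sensitivity/specificity, IFAT-like
apparent = 0.45          # illustrative apparent prevalence
true_prev = rogan_gladen(apparent, se, sp)
```

The correction shows why imperfect tests bias raw prevalence figures, which is the issue the Bayesian model addresses jointly for both tests and both populations.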

Relevance:

100.00%

Publisher:

Abstract:

The bigeye thresher, Alopias superciliosus, is commonly caught as bycatch in pelagic longline fisheries targeting swordfish. Little information is yet available on the biology of this species, however. As part of an ongoing study, observers placed aboard fishing vessels have been collecting a set of information that includes samples of vertebrae, with the aim of investigating the age and growth of A. superciliosus. A total of 117 specimens were sampled between September 2008 and October 2009 in the tropical northeastern Atlantic, ranging from 101 to 242 cm fork length (FL) (176 to 407 cm total length). The A. superciliosus vertebrae were generally difficult to read, mainly because they were poorly calcified, which is typical of lamniform sharks. Preliminary trials were carried out to determine the most efficient band enhancement technique for this species, and crystal violet section staining was found to be the best methodology. Estimated ages in this sample ranged from 2 to 22 years for females and 1 to 17 years for males. A version of the von Bertalanffy growth function (VBGF) re-parameterised to estimate L(0), and a modified VBGF using a fixed L(0), were fitted to the data. The Akaike information criterion (AIC) was used to compare these models. The VBGF produced the best results, with the following parameters: L(inf) = 293 cm FL, k = 0.06 y(-1) and L(0) = 111 cm FL for females; L(inf) = 206 cm FL, k = 0.18 y(-1) and L(0) = 93 cm FL for males. The estimated growth coefficients confirm that A. superciliosus is a slow-growing species, highlighting its vulnerability to fishing pressure. It is therefore urgent to carry out more biological research to inform fishery managers more adequately and address conservation issues.
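The fitted curve is easy to evaluate: the L(0) re-parameterisation of the von Bertalanffy growth function gives length-at-age directly from the three reported parameters. Below are the female values from the abstract:

```python
import math

def vbgf_length(age, l_inf, k, l0):
    # von Bertalanffy growth function re-parameterised with length at birth L0:
    # L(t) = L_inf - (L_inf - L0) * exp(-k * t)
    return l_inf - (l_inf - l0) * math.exp(-k * age)

# female parameters reported in the abstract (cm FL, 1/year, cm FL)
l_inf, k, l0 = 293.0, 0.06, 111.0
length_at_10 = vbgf_length(10.0, l_inf, k, l0)
```

At age 0 the curve returns L(0) by construction, and the small k means the asymptotic length L(inf) is approached slowly, which is exactly the slow-growth pattern the abstract highlights.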

Relevance:

100.00%

Publisher:

Abstract:

We consider a conservation law perturbed by a linear diffusion and a general form of non-positive dispersion. We prove the convergence of the corresponding solution to the entropy weak solution of the hyperbolic conservation law.
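The regularisation idea can be pictured numerically: the sketch below advances Burgers' equation u_t + (u^2/2)_x = eps * u_xx with a conservative Lax-Friedrichs flux plus an explicit diffusion term. This is a generic illustration of diffusive regularisation of a scalar conservation law, not the paper's specific diffusive-dispersive model:

```python
import math

def step(u, dx, dt, eps):
    # one explicit step of u_t + (u^2/2)_x = eps * u_xx on a periodic grid:
    # Lax-Friedrichs numerical flux plus centered diffusion (conservative form)
    n = len(u)
    f = [0.5 * v * v for v in u]
    out = []
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        flux_r = 0.5 * (f[i] + f[r]) - 0.5 * (dx / dt) * (u[r] - u[i])
        flux_l = 0.5 * (f[l] + f[i]) - 0.5 * (dx / dt) * (u[i] - u[l])
        diff = eps * (u[r] - 2.0 * u[i] + u[l]) / dx
        out.append(u[i] - (dt / dx) * (flux_r - flux_l - diff))
    return out

n, dx, dt, eps = 20, 0.1, 0.01, 0.01
u = [math.sin(2.0 * math.pi * i / n) for i in range(n)]
for _ in range(50):
    u = step(u, dx, dt, eps)  # total mass is conserved by the flux form
```

As eps shrinks (with the mesh refined accordingly), such regularised solutions converge to the entropy weak solution of the hyperbolic conservation law, which is the type of limit the paper establishes for its diffusive-dispersive perturbation.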