927 results for: Non-ionic surfactant. Cloud point. Flory-Huggins model. UNIQUAC model. NRTL model
Abstract:
Reconstruction of patient-specific 3D bone surfaces from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme that combines statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence establishing algorithm, and show its application to reconstructing the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformation to find a fraction of best-matched 2D point pairs between features detected in the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs, turning the 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. An average mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone; it decreased to 1.0 mm when three fluoroscopic images were used.
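The symmetric injective nearest-neighbor step described above can be sketched in a few lines. The code below is an illustrative reconstruction (function and parameter names are ours, not the paper's): it pairs mutual nearest neighbors between two point sets and keeps only the best-matched fraction by distance.

```python
import numpy as np

def symmetric_injective_matches(A, B, keep_fraction=0.5):
    """Pair points of A and B that are mutual nearest neighbours (a
    symmetric, injective mapping), then keep only the best-matched
    fraction of pairs by distance. Illustrative sketch, not the
    paper's operator."""
    # pairwise Euclidean distance matrix
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    a_to_b = D.argmin(axis=1)  # nearest B index for each A point
    b_to_a = D.argmin(axis=0)  # nearest A index for each B point
    # mutual nearest neighbours: injective by construction
    pairs = [(i, int(a_to_b[i])) for i in range(len(A))
             if b_to_a[a_to_b[i]] == i]
    pairs.sort(key=lambda p: D[p[0], p[1]])  # best matches first
    n_keep = max(1, int(keep_fraction * len(pairs)))
    return pairs[:n_keep]
```

Keeping only a fraction of the pairs is what makes the matching robust to spurious detections: outlier features rarely survive the mutual-nearest-neighbor test.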
Abstract:
An electrospray source has been developed using a novel fluid that is both magnetic and conductive. Unlike conventional electrospray sources, which require microfabricated structures to support the fluid to be electrosprayed, this new electrospray fluid exploits the Rosensweig instability to create the structures in the magnetic fluid when an external magnetic field is applied. Application of an external electric field then causes these magnetic fluid structures to spray. These fluid-based structures were found to spray at a lower onset voltage than is predicted for electrospray sources with solid structures of similar geometry. They were also found to be resilient to damage, unlike the solid structures found in traditional electrospray sources. Further, experimental studies of magnetic fluids in non-uniform magnetic fields were conducted. The modes of Rosensweig instabilities created by uniform magnetic fields have been studied in depth, but few if any studies have been performed on Rosensweig instabilities formed by non-uniform magnetic fields. The measured spacing of the cone-like structures of ferrofluid in a non-uniform magnetic field was found to agree with a proposed theoretical model.
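For the uniform-field case the abstract contrasts with, the classical Rosensweig (normal-field) instability has a well-known critical peak spacing, the capillary wavelength lambda_c = 2*pi*sqrt(sigma/(g*delta_rho)). A minimal sketch, with illustrative fluid properties that are not taken from the abstract:

```python
import math

def rosensweig_spacing(surface_tension, density_difference, g=9.81):
    """Critical peak spacing of the uniform-field Rosensweig instability:
    the capillary wavelength lambda_c = 2*pi*sqrt(sigma / (g * delta_rho))."""
    return 2.0 * math.pi * math.sqrt(surface_tension / (g * density_difference))

# Illustrative values for a water-based ferrofluid against air
# (sigma ~ 0.025 N/m, delta_rho ~ 1200 kg/m^3), giving a spacing near 1 cm.
spacing = rosensweig_spacing(0.025, 1200.0)
```

In a non-uniform field this uniform-field wavelength no longer applies directly, which is the gap the study above addresses experimentally.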
Abstract:
This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and to subsequently calculate long-term throw-rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system, including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. Based on this, the mean height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into areas above, below and within the range of the mean (within one standard deviation), and the modifications of vertical displacement can be quantified. The fault segment can therefore be subdivided into areas which are influenced by external modification, such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method. This method is used to determine how much influence external processes have had on the vertical displacement. By combining the morphological and statistical results, the fault can be subdivided into areas with high external influences and areas with authentic fault scarps, which have little or no external influence.
This subdivision is necessary for long-term throw-rate calculations, because without this differentiation the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment, since fault slip-rate data govern earthquake recurrence. Furthermore, with this workflow, areas with minimal external influences can be determined, not only for throw-rate calculations but also for selecting sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study are: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influence, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in a range of ±6% (this varies from 7 to 12% depending on the fault); iv) the long-term throw-rate (since 13 ka) calculated for four scarps in Crete using the authentic vertical displacement is 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
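The semi-variance calculation mentioned above follows the classical variogram estimator gamma(h) = (1/(2*N(h))) * sum of (z_i - z_j)^2 over point pairs separated by approximately the lag h. A minimal sketch for an along-strike scarp-height profile; the names and the simple lag binning are illustrative, not the paper's:

```python
import numpy as np

def empirical_semivariogram(x, z, lags, tol):
    """Classical estimator: gamma(h) = sum (z_i - z_j)^2 / (2 * N(h)),
    over point pairs whose separation |x_i - x_j| lies within tol of h.
    x: along-strike positions, z: vertical displacement at each position."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    sep = np.abs(x[:, None] - x[None, :])   # pairwise separations
    sq = (z[:, None] - z[None, :]) ** 2     # squared height differences
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(sep - h) <= tol, k=1)  # count each pair once
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

A flat, low semivariogram indicates an authentic scarp segment; rising semi-variance at short lags flags stretches reworked by erosion or sedimentation.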
Abstract:
Since the beginning of 3D computer vision, it has been necessary to use techniques that reduce the data to make it tractable while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this is becoming more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based one, a color-based one, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, while color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
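As a concrete example of the homogeneous samplers compared above, a simple voxel-grid average replaces all points in each cubic cell by their centroid, producing a uniform-density cloud. This is an illustrative sketch and does not reproduce the paper's exact kernels:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same cubic voxel by their
    centroid, yielding a homogeneous (uniform-density) point cloud."""
    keys = np.floor(points / voxel_size).astype(int)
    # group points by voxel; `inverse` maps each point to its voxel id
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```

Feature-preserving samplers (normal- or color-based) would instead weight the selection by local normal or intensity variation, concentrating points where those features change.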
Abstract:
The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods that have better accuracy than existing methods. We first present point-based PoIRe, which provides the framework for point-based global rigid registrations. It allows a choice of different search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that takes advantage of the best characteristics of the other two. We use a robust similarity measure that is insensitive to noise, which is often introduced during feature extraction. We show the robustness of PoIRe by using it to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with images generated by 3D computed tomography (CT) and magnetic resonance (MR) imaging. PoIRe was also evaluated using benchmark data sets from the blind Retrospective Image Registration Evaluation (RIRE) project. We show that PoIRe is better than existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm. We extend the robust similarity measure used in PoIRe to non-rigid registrations, adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registrations we show that it performs better than existing methods and that it is less sensitive to starting conditions. We test our non-rigid registration method using available benchmark data sets for shape registration. Finally, we explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multi-modal registration.
For multimodal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
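The rigid point-based baselines PoIRe is compared against (e.g. ICP) repeatedly solve a closed-form least-squares alignment between corresponded point sets. The sketch below shows that inner step (the Kabsch/Procrustes solution) with illustrative names; it is not the dissertation's code:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares rotation R and translation t such that
    R @ p + t approximates q for corresponding rows of P and Q
    (Kabsch/Procrustes solution, the inner step of ICP-style methods)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

ICP alternates this solve with nearest-neighbor correspondence updates; a robust similarity measure, as proposed above, changes how correspondences are weighted rather than this closed-form step.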
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or of price increases for service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of any service. In a multi-cloud scenario, it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that can select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. Thus, this work proposes a strategy composed of two phases.
The first phase consists of application modeling, exploring the capacity for representing commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
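The MAPE-K loop described above can be sketched schematically. Everything here (the metric names, the cheapest-feasible selection rule, the function shape) is an illustrative assumption, not the thesis implementation:

```python
def mape_k_step(configs, metrics, requirements, knowledge):
    """One Monitor-Analyze-Plan pass over candidate multi-cloud
    configurations; an Execute phase would then apply the returned choice."""
    # Monitor: refresh the shared knowledge base with fresh observations
    knowledge.update(metrics)
    # Analyze: a configuration is feasible if every requirement holds
    # (lower-is-better metrics vs. higher-is-better metrics)
    lower_is_better = ("response_time", "price")
    feasible = [c for c in configs
                if all(knowledge[c][k] <= v if k in lower_is_better
                       else knowledge[c][k] >= v
                       for k, v in requirements.items())]
    # Plan: optimal selection -- here, the cheapest feasible configuration
    if not feasible:
        return None  # no compliant configuration: adaptation must escalate
    return min(feasible, key=lambda c: knowledge[c]["price"])
```

In the thesis the selection is driven by the extended feature model's properties rather than a flat dictionary, but the loop structure (monitor, analyze against requirements, plan an optimal configuration, execute) is the same.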
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services being adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, where customers and providers are left to point fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying events that occurred in the past to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required, so that operators and users can define policies for compliance purposes; current policy standards do not cater for such a requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and express policies for validation against the model. For implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into the XACML format.
Abstract:
This thesis examines the importance of effective stakeholder engagement that complies with the doctrines of social justice in decision-making for non-renewable resource management. It uses hydraulic fracturing in the Green Point Shale Formation in Western Newfoundland as a case study. The thesis takes as its theoretical background John Rawls' and David Miller's theories of social justice, and identifies the social justice principles relevant to stakeholder engagement. It compares the method of stakeholder engagement employed by the Newfoundland and Labrador Hydraulic Fracturing Review Panel (NLHFRP) with the stakeholder engagement techniques recommended by the Structured Decision Making (SDM) model, as applied to a simulated case study involving hydraulic fracturing in the Green Point Shale Formation. Using the identified social justice principles, the thesis then develops a framework to measure the level of compliance of both stakeholder engagement techniques with social justice principles. The main finding is that the engagement techniques prescribed by the SDM model comply more closely with the doctrines of social justice than those applied by the NLHFRP. The thesis concludes by recommending that the SDM model be more widely used in non-renewable resource management decision-making, in order to ensure that all stakeholders' concerns are effectively heard, understood and transparently incorporated into non-renewable resource policies, making them consistent with local priorities and goals, and with social justice norms and institutions.
Abstract:
As agricultural non-point source pollution (ANPSP) has become the most significant threat to the water environment and a driver of lake eutrophication in China, more and more scientists and technologists are focusing on control countermeasures and the pollution mechanisms of agricultural non-point source pollution. An unreasonable rural production structure and limited scientific management measures are the main reasons for the acute ANPSP problems in China. At present, the main obstacle to pollution control is a lack of specific regulations, which reduces the government's management efficiency. In view of these characteristics and problems, this paper puts forward corresponding policies. The status of agricultural non-point source pollution in China is analyzed, and an ANPSP prevention and control model is proposed, based on governance policy, environmental legislation, a technical system and subsidy policy. Finally, a case analysis of Qiandao Lake is given, and an economic policy suited to its situation is adopted.
Abstract:
This thesis is situated in the fields of Material Physics and Organic Electronics and aims to determine the charge carrier density and mobility in the hydrated conducting polymer–polyelectrolyte blend PEDOT:PSS. This kind of material combines electronic semiconductor functionality with selective ionic transport, biocompatibility and electrochemical stability in water. This advantageous combination of material properties makes PEDOT:PSS a unique material for building organic electrochemical transistors (OECTs), which have relevant applications as amplifying transducers for bioelectronic signals. In order to measure charge carrier density and mobility, an innovative 4-wire, contact-independent characterization technique was introduced, the electrolyte-gated van der Pauw (EgVDP) method, which was combined with electrochemical impedance spectroscopy. The technique was applied to macroscopic thin-film samples and micro-structured PEDOT:PSS thin-film devices fabricated using photolithography. The EgVDP method proved effective for measuring hole mobility in hydrated PEDOT:PSS thin films, which was found to be <μ>=(0.67±0.02) cm^2/(V*s). By comparing this result with 2-point-probe measurements, we found that contact resistance effects led to a mobility overestimation in the latter. Ion accumulation at the drain contact creates a gate-dependent potential barrier and is discussed as a probable reason for the overestimation in 2-point-probe measurements. The measured charge transport properties of PEDOT:PSS were analyzed in the framework of an extended drift-diffusion model. The extended model also fits the non-linear response in the transport characterization well, and the results suggest a Gaussian DOS for PEDOT:PSS. The PEDOT:PSS–electrolyte interface capacitance was found to be voltage-independent, confirming the hypothesis of its morphological origin, related to the separation between the electronic (PEDOT) and ionic (PSS) phases in the blend.
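The van der Pauw method underlying the EgVDP technique rests on the textbook relation exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1, which is solved numerically for the sheet resistance R_s from the two 4-wire resistance measurements R_A and R_B. A minimal sketch (not the thesis code):

```python
import math

def sheet_resistance(R_A, R_B):
    """Solve the van der Pauw relation
    exp(-pi*R_A/Rs) + exp(-pi*R_B/Rs) = 1 for Rs by bisection.
    f(Rs) below rises monotonically from -1 (tiny Rs) to +1 (huge Rs),
    so a simple sign-change search converges."""
    f = lambda Rs: (math.exp(-math.pi * R_A / Rs)
                    + math.exp(-math.pi * R_B / Rs) - 1.0)
    lo, hi = 1e-9, 1e9
    for _ in range(200):  # bisection down to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a symmetric sample (R_A = R_B = R) this reduces to the familiar R_s = pi*R/ln(2).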
Abstract:
This thesis project aims at the development of an algorithm for obstacle detection and for the interaction between the safety areas of an Automated Guided Vehicle (AGV) and a point-cloud-derived map, within the context of a CAD software. The first part of the project focuses on the implementation of an algorithm for the clipping of general polygons, with which it has been possible to: construct the safety-area polygon, derive the sweep of these areas along the navigation path by performing a union, and detect intersections with lines or polygons representing obstacles. The second part concerns the construction of a map in terms of geometric entities (lines and polygons), starting from a point cloud given by a 3D scan of the environment. The point cloud is processed using filters, clustering algorithms and concave/convex-hull-derived algorithms in order to extract line and polygon entities representing obstacles. Finally, the last part uses the a priori knowledge of possible obstacle detections on a given segment to predict the behavior of the AGV, and uses this prediction to optimize the choice of the vehicle's assigned velocity in that segment, minimizing travel time.
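As a simplified illustration of the polygon-clipping building block: the Sutherland-Hodgman algorithm clips a polygon against a convex window. The thesis targets general polygons, which require heavier algorithms (e.g. Vatti or Greiner-Hormann), so this sketch only conveys the idea:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against a *convex* CCW polygon
    `clip`, one clip edge at a time. Vertices are (x, y) tuples."""
    def inside(p, a, b):  # p on the left of directed edge a->b?
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):  # segment p-q with the infinite line a-b
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    out = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        inp, out = out, []
        if not inp:
            break  # polygon fully clipped away
        prev = inp[-1]
        for cur in inp:
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    out.append(intersect(prev, cur, a, b))
                out.append(cur)
            elif inside(prev, a, b):
                out.append(intersect(prev, cur, a, b))
            prev = cur
    return out
```

Sweeping a safety area along a path, as described above, then amounts to taking the union of the clipped/transformed area polygons at successive path positions.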
Abstract:
The design of a lateral line for drip irrigation requires accurate evaluation of head losses not only in the pipe but in the emitters as well. A procedure was developed to determine localized head losses within the emitters through a mathematical model that accounts for the obstruction caused by the insertion point. These localized losses can be significant when compared with the total head losses within the system, due to the large number of emitters typically installed along the lateral line. An experiment was carried out in which the flow characteristics were altered to create Reynolds numbers (R) from 7,480 to 32,597, providing turbulent flow and a maximum velocity of 2.0 m s(-1). The geometry of the emitter was determined with an optical projector and sensor. An equation was formulated to facilitate the localized head loss calculation using the geometric characteristics of the emitter (emitter length, obstruction ratio, and contraction coefficient). The mathematical model was tested using laboratory measurements on four emitters. The local head loss was accurately estimated for the Uniram (difference of +13.6%) and Drip Net (difference of +7.7%) emitters, while appreciable deviations were found for the Twin Plus (-21.8%) and Tiran (+50%) emitters. The head loss estimated by the model was sensitive to variations in the obstruction area of the emitter. However, the variations in the local head loss did not result in significant variations in the maximum length of the lateral lines. In general, for all the analyzed emitters, a 50% increase in the local head loss resulted in less than an 8% reduction in the maximum lateral length.
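The quantities in the experiment above follow standard hydraulics: the Reynolds number Re = v*D/nu and the localized (minor) loss h = K*v^2/(2g). A minimal sketch; the loss coefficient K would come from the paper's fitted geometric expression (emitter length, obstruction ratio, contraction coefficient), which is not reproduced here:

```python
def reynolds_number(velocity, diameter, kinematic_viscosity=1.0e-6):
    """Re = v*D/nu; the default nu is for water near 20 C."""
    return velocity * diameter / kinematic_viscosity

def local_head_loss(K, velocity, g=9.81):
    """Localized (minor) head loss h = K * v^2 / (2*g), in metres of water.
    K is the dimensionless loss coefficient of the emitter insertion."""
    return K * velocity ** 2 / (2.0 * g)
```

With an assumed inner diameter of 16 mm and the maximum velocity of 2.0 m/s, Re is about 3.2 x 10^4, the order of the largest Reynolds number reported above.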
Abstract:
This work describes an easy one-pot synthesis of MFe2O4 (M = Co, Fe, Mn, and Ni) magnetic nanoparticles (MNPs) by the thermal decomposition of Fe(acac)3/M(acac)2, using the ionic liquids (ILs) BMI·NTf2 (1-n-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide) or BMI·PF6 (1-n-butyl-3-methylimidazolium hexafluorophosphate) as recyclable solvents and oleylamine as the reducing and surface-modifying agent. The effects of reaction temperature and reaction time on the features of the magnetic nanomaterials (size and magnetic properties) were investigated. The growth of the MNPs is easily controlled in the IL by adjusting the reaction temperature and time, as inferred from Fe3O4 MNPs obtained at 150 °C, 200 °C and 250 °C with mean diameters of 8, 10 and 15 nm, respectively. However, the thermal decomposition of Fe(acac)3 performed in a conventional high-boiling-point solvent (diphenyl ether, bp 259 °C), under a similar Fe-to-oleylamine molar ratio as used in the IL synthesis, does not follow the same growth mechanism and yielded only smaller NPs of 5 nm mean diameter. All MNPs are covered by at least one monolayer of oleylamine, making them readily dispersible in non-polar solvents. Besides its influence on nanoparticle growth, which is important for the preparation of highly crystalline MNPs, the IL was easily recycled and has been used in at least 20 successive syntheses.
Abstract:
The thermo-solvatochromism of 2,6-dibromo-4-[(E)-2-(1-methylpyridinium-4-yl)ethenyl]phenolate, MePMBr2, has been studied in mixtures of water, W, with ionic liquids, ILs, in the temperature range of 10 to 60 °C, where feasible. The objectives of the study were to test the applicability of a recently introduced solvation model and to assess the relative importance of solute–solvent solvophobic interactions. The ILs were 1-allyl-3-alkylimidazolium chlorides, where the alkyl groups are methyl, 1-butyl, and 1-hexyl, respectively. The equilibrium constants for the interaction of W and the ILs were calculated from density data; they were found to be linearly dependent on N_C, the number of carbon atoms of the alkyl group, and the van't Hoff equation (log K versus 1/T) applied satisfactorily. Plots of the empirical solvent polarities, E_T(MePMBr2) in kcal mol^-1, versus the mole fraction of water in the binary mixture, chi_W, showed non-linear, i.e., non-ideal behavior. The dependence of E_T(MePMBr2) on chi_W has been conveniently quantified in terms of solvation by W, by the IL, and by the "complex" solvent IL-W. The non-ideal behavior is due to preferential solvation by the IL and, more efficiently, by IL-W. The deviation from linearity increases as a function of increasing N_C of the IL, and is stronger than that observed for solvation of MePMBr2 by aqueous 1-propanol, a solvent whose lipophilicity is 12.8 to 52.1 times larger than those of the ILs investigated. The dependence on N_C is attributed to solute–solvent solvophobic interactions, whose relative contribution to solvation is presumably greater than in mixtures of water and 1-propanol.
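The van't Hoff treatment mentioned above (log K linear in 1/T) amounts to a straight-line fit whose slope yields the interaction enthalpy. A minimal sketch with illustrative data handling, not the paper's workup:

```python
import numpy as np

def vant_hoff_fit(T_kelvin, K_eq):
    """Fit log10(K) = a + b*(1/T); for the van't Hoff relation the slope
    gives the enthalpy via dH = -2.303 * R * b (R = 8.314 J mol^-1 K^-1,
    2.303 converting natural to base-10 logarithms)."""
    invT = 1.0 / np.asarray(T_kelvin, float)
    b, a = np.polyfit(invT, np.log10(np.asarray(K_eq, float)), 1)
    dH = -2.303 * 8.314 * b
    return a, b, dH
```

A satisfactory linear fit over the 10 to 60 °C range, as reported above, indicates that the W–IL interaction enthalpy is effectively constant over that temperature window.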