887 results for network cost models
Abstract:
In this study, we utilise a novel approach to segment the ventricular system in a series of high-resolution T1-weighted MR images. We present a fast reconstruction method for the brain ventricles. The method is based on processing brain sections and placing a fixed number of landmarks on those sections to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished with a self-organising network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the structure of the input space. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates the classical surface reconstruction and filtering processes. The proposed method offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
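As an illustration of the landmark-extraction idea, the following is a minimal NumPy sketch of a standard growing-neural-gas loop applied to a synthetic 2D contour; it is not the authors' implementation, the function name, parameter values and toy contour are assumptions, and the sketch skips the deletion of isolated units that full GNG performs.

    # Illustrative sketch (not the authors' code): growing neural gas (GNG)
    # places a fixed budget of landmark nodes on a noisy 2D contour.
    import numpy as np

    def grow_gng(points, max_nodes=50, n_iter=20000, eps_b=0.05, eps_n=0.005,
                 age_max=50, lam=100, alpha=0.5, d=0.995, rng=None):
        rng = np.random.default_rng(rng)
        w = [points[rng.integers(len(points))].astype(float) for _ in range(2)]
        err = [0.0, 0.0]
        edges = {}                                   # (i, j) with i < j -> age

        def key(i, j):
            return (i, j) if i < j else (j, i)

        for t in range(1, n_iter + 1):
            x = points[rng.integers(len(points))]
            dist = [float(np.sum((x - wi) ** 2)) for wi in w]
            s1, s2 = (int(k) for k in np.argsort(dist)[:2])
            err[s1] += dist[s1]
            w[s1] += eps_b * (x - w[s1])             # move the winner towards the sample
            for (i, j) in list(edges):
                if s1 in (i, j):
                    edges[(i, j)] += 1               # age the winner's edges
                    other = j if i == s1 else i
                    w[other] += eps_n * (x - w[other])   # drag topological neighbours
            edges[key(s1, s2)] = 0                   # connect / refresh the two winners
            edges = {e: a for e, a in edges.items() if a <= age_max}   # drop stale edges
            if t % lam == 0 and len(w) < max_nodes:  # periodically insert a new unit
                q = int(np.argmax(err))
                nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
                if nbrs:
                    f = max(nbrs, key=lambda n: err[n])
                    w.append(0.5 * (w[q] + w[f]))
                    err[q] *= alpha
                    err[f] *= alpha
                    err.append(err[q])
                    r = len(w) - 1
                    edges.pop(key(q, f), None)
                    edges[key(q, r)] = 0
                    edges[key(f, r)] = 0
            err = [e * d for e in err]               # global error decay
        return np.array(w), list(edges)

    # usage: place 40 landmarks on a noisy circular contour (stand-in for a section)
    theta = np.random.uniform(0, 2 * np.pi, 5000)
    contour = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(0, 0.02, (5000, 2))
    landmarks, topology = grow_gng(contour, max_nodes=40)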
Abstract:
The European Commission supports the incorporation of the gender perspective through several channels, now in the new Horizon 2020 programme, but also by funding projects such as gendered innovations, which show how gender innovations add quality to research and to professional practice in health and well-being. One of its policy instruments is the Recommendation on Gender, Science and Innovation, to be developed in the Member States. In this context, the international network on Gender, Science, Technology and Environment (COST genderSTE) was created, which aims at: 1) structural change in institutions so that they promote women researchers; 2) identification of the gender dimensions relevant to the environment; and 3) promotion of a better integration of the gender perspective in research and technology. COST genderSTE supports networking and the dissemination of knowledge from a gender perspective. All these tools offer the opportunity to incorporate the gender perspective into research in Europe.
Abstract:
Society, as we know it today, is completely dependent on computer networks, the Internet and distributed systems, which provide the services we need to perform our daily tasks. Moreover, often without our noticing, all services and distributed systems rely on network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect or improve the main distributed systems. Their role is secondary, unknown and transparent to users, yet they provide the support needed to keep running the distributed systems whose services we use every day. If network management is not considered during the development of a distributed system, the consequences can be serious, up to the total failure of the development effort. It is therefore necessary to consider system management within the design of distributed systems and to systematize its conception, so as to minimize the impact of network management on distributed-system projects. In this paper, we present a method for formalizing the conceptual modelling of a network management system using formal modelling tools, which makes it possible, starting from the definition of the processes, to identify those responsible for them. Finally, we propose a use case in which a conceptual model of a network intrusion detection system is designed.
Abstract:
In this project, we propose the implementation of a 3D object recognition system which will be optimized to operate under demanding time constraints. The system must be robust so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power-consumption mobile GPU computing platform (NVIDIA Jetson TK1) so that it can be integrated into mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on the use of color and depth (RGB-D) data streams provided by low-cost 3D sensors like the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated will be quite broad, ranging from the acquisition, outlier removal or filtering of the input data and the segmentation or characterization of regions of interest in the scene to the object recognition and pose estimation itself. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
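Since the abstract names the preprocessing and recognition stages explicitly, here is a minimal sketch of such a pipeline using the open-source Open3D library; Open3D is an assumption for illustration only (the project itself targets a CUDA-capable Jetson TK1), and the file names and thresholds are placeholders.

    # Illustrative pipeline sketch (not the project's code) using Open3D:
    # filter an RGB-D point cloud, segment candidate objects and estimate the
    # pose of a known model with ICP. Paths and thresholds are placeholders.
    import numpy as np
    import open3d as o3d

    scene = o3d.io.read_point_cloud("scene.pcd")          # placeholder capture
    model = o3d.io.read_point_cloud("object_model.pcd")   # placeholder 3D model

    # 1) downsample and remove statistical outliers
    scene = scene.voxel_down_sample(voxel_size=0.005)
    scene, _ = scene.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # 2) remove the dominant plane (table / floor) with RANSAC
    _, plane_idx = scene.segment_plane(distance_threshold=0.01,
                                       ransac_n=3, num_iterations=1000)
    objects = scene.select_by_index(plane_idx, invert=True)

    # 3) cluster the remaining points into candidate objects
    labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))

    # 4) refine the pose of the model against the largest cluster with ICP
    largest = int(np.bincount(labels[labels >= 0]).argmax())
    candidate = objects.select_by_index(np.where(labels == largest)[0].tolist())
    icp = o3d.pipelines.registration.registration_icp(
        model, candidate, max_correspondence_distance=0.02,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("estimated 4x4 pose:\n", icp.transformation, "\nfitness:", icp.fitness)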
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing results with other neural methods, such as the growing neural gas and Kohonen maps, and with classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we also propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× is obtained compared to the sequential CPU version.
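For reference, this is a minimal NumPy sketch of the core neural-gas learning rule the abstract builds on: every unit is moved towards each sample with a strength that decays exponentially with its distance rank. The unit count, annealing schedule and toy data are assumptions, and the CUDA acceleration is not shown; the per-sample distance and ranking computations are the natural candidates for the GPU parallelisation the abstract reports.

    # Illustrative NumPy sketch (not the authors' CUDA implementation) of the
    # neural-gas update rule.
    import numpy as np

    def neural_gas(points, n_units=500, n_iter=20000,
                   eps=(0.5, 0.005), lam=(10.0, 0.5), rng=None):
        rng = np.random.default_rng(rng)
        w = points[rng.choice(len(points), n_units, replace=False)].astype(float)
        for t in range(n_iter):
            frac = t / n_iter
            eps_t = eps[0] * (eps[1] / eps[0]) ** frac   # annealed learning rate
            lam_t = lam[0] * (lam[1] / lam[0]) ** frac   # annealed neighbourhood range
            x = points[rng.integers(len(points))]
            ranks = np.argsort(np.argsort(np.linalg.norm(w - x, axis=1)))
            w += (eps_t * np.exp(-ranks / lam_t))[:, None] * (x - w)
        return w

    # usage: condense a dense scanned cloud (toy data here) into a 500-unit model
    cloud = np.random.rand(100000, 3)            # stand-in for an RGB-D scan
    model_points = neural_gas(cloud, n_units=500)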
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree-of-freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates the recent advances made in Life Cycle Assessment (LCA). The optimization strategy guarantees convergence to a local optimum within the tolerance of the numerical noise.
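As an illustration of the hybrid simulation-optimization idea, the sketch below replaces a noisy "unit" with a Kriging (Gaussian-process) surrogate fitted to sampled outputs and then optimizes a mixed explicit/surrogate objective with a local solver; the stand-in unit, the objective and all numbers are assumptions, not the paper's flowsheet.

    # Illustrative sketch (not the paper's framework): Kriging surrogate for a
    # noisy unit plus explicit equations, optimised locally.
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def noisy_unit(x):                  # stand-in for an expensive simulator call
        return np.sin(3 * x[:, 0]) * x[:, 1] + 0.01 * np.random.randn(len(x))

    # 1) sample the unit over the decision-variable ranges and fit the surrogate
    X = np.random.uniform([0, 0], [2, 2], size=(60, 2))
    y = noisy_unit(X)
    kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                                       normalize_y=True).fit(X, y)

    # 2) optimise an objective that mixes explicit equations and the surrogate
    def objective(x):
        cost = 5.0 * x[0] + 2.0 * x[1] ** 2               # explicit (cheap) part
        recovered = kriging.predict(x.reshape(1, -1))[0]  # surrogate for the unit
        return cost - 10.0 * recovered

    result = minimize(objective, x0=np.array([1.0, 1.0]),
                      bounds=[(0, 2), (0, 2)], method="L-BFGS-B")
    print("local optimum:", result.x, "objective:", result.fun)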
Abstract:
The most straightforward European single energy market design would entail a European system operator regulated by a single European regulator. This would ensure the predictable development of rules for the entire EU, significantly reducing regulatory uncertainty for electricity sector investments. But such a first-best market design is unlikely to be politically realistic in the European context for three reasons. First, the necessary changes compared to the current situation are substantial and would produce significant redistributive effects. Second, a European solution would deprive member states of the ability to manage their energy systems nationally. And third, a single European solution might fall short of being well-tailored to consumers' preferences, which differ substantially across the EU.

To nevertheless reap significant benefits from an integrated European electricity market, we propose the following blueprint.

First, we suggest adding a European system-management layer to complement national operation centres and help them to better exchange information about the status of the system, expected changes and planned modifications. The ultimate aim should be to transfer the day-to-day responsibility for the safe and economic operation of the system to the European control centre. To further increase efficiency, electricity prices should be allowed to differ between all network points, between and within countries. This would enable the throughput of electricity through national and international lines to be safely increased without any major investments in infrastructure.

Second, to ensure that national network plans are consistent and contribute to providing the infrastructure for a functioning single market, the role of the European ten-year network development plan (TYNDP) needs to be upgraded by obliging national regulators to approve only the projects planned at European level, unless they can prove that deviations are beneficial. This boosted role of the TYNDP would need to be underpinned by resolving the issues of conflicting interests and information asymmetry. Therefore, the network planning process should be opened to all affected stakeholders (generators, network owners and operators, consumers, residents and others), and the European Agency for the Cooperation of Energy Regulators (ACER) should be enabled to act as a welfare-maximising referee. An ultimate political decision by the European Parliament on the entire plan would open a negotiation process around selecting alternatives and agreeing compensation. This ensures that all stakeholders have an interest in guaranteeing a certain degree of balance of interests in the earlier stages. In fact, transparent planning, early stakeholder involvement and democratic legitimisation are well suited to minimising local opposition to new lines as far as possible.

Third, sharing the cost of network investments in Europe is a critical issue, not least because so far even the most sophisticated models have been unable to identify the individual long-term net benefit in an uncertain environment. A workable compromise to finance new network investments would consist of three components: (i) all easily attributable costs should be levied on the responsible party; (ii) all network users at nodes that are expected to receive more imports through a line extension should be obliged to pay a share of the line-extension cost through their network charges; (iii) the rest of the cost is socialised to all consumers.
Such a cost-distribution scheme will involve some intra-European redistribution from the well-developed countries (infrastructure-wise) to those that are catching up. However, such a scheme would perform this redistribution in a much more efficient way than the Connecting Europe Facility’s ad-hoc disbursements to politically chosen projects, because it would provide the infrastructure that is really needed.
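A stylised numerical illustration of the proposed three-part cost sharing may help; all figures, including the 40% import share, are invented for the example and do not come from the paper.

    # Stylised illustration (not from the paper) of the three-part cost sharing:
    # (i) attributable costs to the responsible party, (ii) a share charged to
    # import-benefiting nodes, (iii) the remainder socialised to all consumers.
    line_cost = 900.0        # total cost of the line extension (invented, M EUR)
    attributable = 150.0     # e.g. a connection triggered by one generator
    import_share = 0.4       # invented fraction levied on import-benefiting nodes

    importer_charge = import_share * (line_cost - attributable)
    socialised = line_cost - attributable - importer_charge
    print(f"responsible party: {attributable:.0f}, "
          f"importing nodes: {importer_charge:.0f}, "
          f"all consumers: {socialised:.0f}")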
Abstract:
In this paper we estimate the impact of subsidies from the EU's common agricultural policy on farm bank loans. According to the theoretical results, if subsidies are paid at the beginning of the growing season they may reduce bank loans, whereas if they are paid at the end of the season they increase bank loans; these results are conditional on whether farms are credit constrained and on the relative cost of internal and external financing. In the empirical analysis, we use farm-level panel data from the Farm Accountancy Data Network to test the theoretical predictions for the period 1995–2007. We employ fixed-effects and generalised method of moments (GMM) models to estimate the impact of subsidies on farm loans. The results suggest that subsidies influence farm loans and that the effects tend to be non-linear and indirect. The results also indicate that both coupled and decoupled subsidies stimulate long-term loans, but that, owing to decoupled subsidies, the long-term loans of large farms increase more than those of small farms. Furthermore, the results imply that short-term loans are affected only by decoupled subsidies, and that they are altered by decoupled subsidies more for small farms than for large farms; however, when controlling for endogeneity, only the decoupled payments affect loans and the relationship is non-linear.
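For illustration, here is a minimal sketch of a fixed-effects panel regression of farm loans on subsidies, in the spirit of the FADN analysis, using the Python linearmodels package; the data file and column names are hypothetical, and the GMM step the paper uses to address endogeneity is omitted.

    # Illustrative sketch (not the paper's estimation): fixed-effects panel
    # regression of farm loans on coupled and decoupled subsidies.
    import pandas as pd
    from linearmodels.panel import PanelOLS

    # hypothetical farm-level panel with columns:
    # farm_id, year, loans, coupled_subsidies, decoupled_subsidies, farm_size
    df = pd.read_csv("fadn_panel.csv").set_index(["farm_id", "year"])

    exog = df[["coupled_subsidies", "decoupled_subsidies", "farm_size"]]
    model = PanelOLS(df["loans"], exog, entity_effects=True, time_effects=True)
    results = model.fit(cov_type="clustered", cluster_entity=True)
    print(results.summary)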
Abstract:
Background: Diabetes is a major cause of morbidity and mortality worldwide, and Portugal is the European country with the highest prevalence of the disease. While diabetes prevalence data are updated annually in Portugal, the General Practitioners' (GP) Sentinel Network is the only data source on diabetes incidence. This study describes the trends in diabetes incidence between 1992 and 2015 and estimates projections of future incidence rates in Portugal until 2024. Methods: An ecological time-series study was conducted using data from the GP Sentinel Network between 1992 and 2015. Family doctors reported all new cases of diabetes in their patient lists. Annual trends were estimated through Poisson regression models, as were the future incidence rates (until 2024), stratified by sex and age group. Incidence rate projections were adjusted to the distribution of the resident Portuguese population according to Statistics Portugal projections. Results: The average increase in the diabetes incidence rate was 4.29% (95% CI 3.80–4.80) per year under study. Until 1998–2000 the annual incidence rate was higher in women, and from 1998–2000 to 2013–2015 it was higher in men. The incidence rate projected for 2022–2024 was 972.77 per 100,000 inhabitants overall, and 846.74 and 1114.42 per 100,000, respectively, in women and men. Conclusions: This is the first study in Portugal to estimate projections of the diabetes incidence rate. These disturbing projections seem realistic if past trends continue; effective public health policies will need to be implemented to mitigate this alarming future scenario.
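As an illustration of the estimation approach, the sketch below fits a Poisson regression of annual case counts with the population as exposure and projects the incidence rate forward; the data file, column names and the single linear trend are simplifying assumptions (the study stratifies by sex and age group).

    # Illustrative sketch (not the study's code): Poisson trend model with a
    # population offset, then a projected incidence rate for 2024.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("incidence_by_year.csv")   # hypothetical: year, cases, population
    X = sm.add_constant(df["year"] - df["year"].min())
    model = sm.GLM(df["cases"], X, family=sm.families.Poisson(),
                   offset=np.log(df["population"])).fit()

    # average annual change in the incidence rate, in percent
    print("annual change: %.2f%%" % (100 * (np.exp(model.params.iloc[1]) - 1)))

    # project the rate per 100,000 inhabitants for 2024 (offset cancels out)
    t_2024 = 2024 - df["year"].min()
    rate_2024 = np.exp(model.params.iloc[0] + model.params.iloc[1] * t_2024) * 1e5
    print("projected incidence rate in 2024: %.1f per 100,000" % rate_2024)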
Abstract:
BACKGROUND Researchers evaluating angiomodulating compounds as part of scientific projects or pre-clinical studies are often confronted with the limitations of the available animal models. Rough and insufficient early-stage compound assessment without reliable quantification of the vascular response contributes, at least partially, to the low translation rate to the clinic. OBJECTIVE To establish an advanced, rapid and cost-effective angiogenesis assay for the precise and sensitive assessment of angiomodulating compounds using zebrafish caudal fin regeneration. It should provide information on the angiogenic mechanisms involved and should include qualitative and quantitative data on drug effects in a non-biased and time-efficient way. APPROACH & RESULTS Basic vascular parameters (total regenerated area, vascular projection area, contour length, vessel area density) were extracted from in vivo fluorescence microscopy images using a stereological approach. Skeletonization of the vasculature by our custom-made software Skelios provided additional parameters, including "graph energy" and "distance to the farthest node". The latter gave important insights into the complexity, connectivity and maturation status of the regenerating vascular network. The use of a reference point (vascular parameters prior to amputation) is unique to the model and crucial for a proper assessment. Additionally, the assay provides exceptional possibilities for correlative microscopy by combining in vivo imaging and morphological investigation of the area of interest. The 3-way correlative microscopy links the dynamic changes in vivo with their structural substrate at the subcellular level. CONCLUSIONS The improved zebrafish fin regeneration model with advanced quantitative analysis and optional 3-way correlative morphology is a promising in vivo angiogenesis assay, well suited for basic research and preclinical investigations.
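To illustrate the skeleton-based network measures, the sketch below skeletonises a binary vessel mask, builds an 8-connected pixel graph and computes a "distance to the farthest node" style measure; it is not the Skelios software, and the mask file is a placeholder.

    # Illustrative sketch (not Skelios): skeletonise a vessel mask and derive a
    # simple graph measure from the skeleton.
    import numpy as np
    import networkx as nx
    from skimage.io import imread
    from skimage.morphology import skeletonize

    mask = imread("vessel_mask.png", as_gray=True) > 0   # binary vessel segmentation
    skel = skeletonize(mask)

    # build an 8-connected pixel graph of the skeleton
    g = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy or dx) and (y + dy, x + dx) in pixels:
                    g.add_edge((y, x), (y + dy, x + dx))

    # simple descriptors on the largest connected component
    core = g.subgraph(max(nx.connected_components(g), key=len))
    ecc = nx.eccentricity(core)          # longest shortest path from each node
    print("skeleton pixels:", core.number_of_nodes())
    print("distance to farthest node (graph diameter):", max(ecc.values()))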
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Orthotopic liver retransplantation (re-OLT) is highly controversial. The objectives of this study were to determine the validity of a recently developed United Network for Organ Sharing (UNOS) multivariate model using an independent cohort of patients undergoing re-OLT outside the United States, to determine whether incorporation of other variables that were incomplete in the UNOS registry would provide additional prognostic information, to develop new models combining data sets from both cohorts, and to evaluate the validity of the Model for End-Stage Liver Disease (MELD) in patients undergoing re-OLT. Two hundred eighty-one adult patients undergoing re-OLT (between 1986 and 1999) at 6 foreign transplant centers comprised the validation cohort. We found good agreement between actual survival and predicted survival in the validation cohort; 1-year patient survival rates in the low-, intermediate-, and high-risk groups (as assigned by the original UNOS model) were 72%, 68%, and 36%, respectively (P < .0001). In the patients for whom the international normalized ratio (INR) of prothrombin time was available, MELD correlated with outcome following re-OLT; the median MELD scores for patients surviving at least 90 days compared with those dying within 90 days were 20.75 versus 25.9, respectively (P = .004). Utilizing both patient cohorts (n = 979), a new model, based on recipient age, total serum bilirubin, creatinine, and interval to re-OLT, was constructed (whole-model χ² = 105, P < .0001). Using the c-statistic with 30-day, 90-day, 1-year, and 3-year mortality as the end points, the areas under the receiver operating characteristic (ROC) curves of 4 different models were compared. In conclusion, prospective validation and use of these models as adjuncts to clinical decision making in the management of patients being considered for re-OLT are warranted.
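As an illustration of the model-comparison metric, the sketch below computes a c-statistic (area under the ROC curve) for a risk score against 90-day mortality; the toy data and the linear score are hypothetical stand-ins, not the study's fitted model.

    # Illustrative sketch (not the study's analysis): c-statistic of a toy risk
    # score for 90-day mortality after re-OLT.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # hypothetical patient-level arrays
    age = np.array([45, 60, 52, 38, 66, 57])
    bilirubin = np.array([3.1, 12.4, 6.8, 2.2, 18.0, 9.5])
    creatinine = np.array([0.9, 2.1, 1.4, 0.8, 2.6, 1.7])
    died_90d = np.array([0, 1, 0, 0, 1, 1])

    # toy linear risk score over predictors of the kind used in the combined model
    score = 0.03 * age + 0.08 * np.log(bilirubin) + 0.6 * np.log(creatinine)
    print("c-statistic (90-day mortality):", round(roc_auc_score(died_90d, score), 3))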