807 results for Network-based positioning


Relevância: 80.00%

Resumo:

Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network-based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content-expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalisability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years.
With advances in data mining techniques, increased capacity for analysis of large databases, and involvement of computer scientists in the injury prevention field, along with more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see a continued growth and advancement in knowledge of text mining in the injury field.
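The Bayesian methods the review found most common can be illustrated with a hand-rolled multinomial naive Bayes classifier over injury narratives. This is a hypothetical sketch: the tiny corpus, the category labels and the add-one smoothing are our own illustrative assumptions, not data or a method from any of the reviewed studies.

```python
# Minimal multinomial naive Bayes over injury narratives (illustrative only).
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns log priors and smoothed log likelihoods."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Add-one (Laplace) smoothing so unseen words get non-zero probability.
        likelihoods[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                          for w in vocab}
    return priors, likelihoods, vocab

def classify(tokens, priors, likelihoods, vocab):
    scores = {c: priors[c] + sum(likelihoods[c][w] for w in tokens if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)

# Invented toy narratives, labelled with an injury category.
corpus = [
    ("fell from ladder at work".split(), "fall"),
    ("slipped on wet floor and fell".split(), "fall"),
    ("burned hand on hot machine".split(), "burn"),
    ("scalded by hot water".split(), "burn"),
]
priors, likes, vocab = train_nb(corpus)
print(classify("fell off ladder".split(), priors, likes, vocab))  # prints "fall"
```

As the review notes, this kind of model is precise when the number of categories is small; real systems train on far larger coded narrative datasets.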

Relevância: 80.00%

Resumo:

Background International standard practice for confirming correct placement of a central venous access device is the chest X-ray. The intracavitary electrocardiogram-based insertion method is radiation-free and allows real-time placement verification, enabling immediate treatment and reducing the need for post-procedural repositioning. Methods Relevant databases were searched for prospective randomised controlled trials (RCTs) or quasi-RCTs that compared the effectiveness of electrocardiogram-guided catheter tip positioning with placement using surface-anatomy-guided insertion plus chest X-ray confirmation. The primary outcome was accurate catheter tip placement. Secondary outcomes included complications, patient satisfaction and costs. Results Five studies involving 729 participants were included. Electrocardiogram-guided insertion was more accurate than surface-anatomy-guided insertion (odds ratio: 8.3; 95% confidence interval (CI) 1.38 to 50.07; p=0.02). There was a lack of reporting on complications, patient satisfaction and costs. Conclusion The evidence suggests that intracavitary electrocardiogram-based positioning is superior to surface-anatomy-guided positioning of central venous access devices, leading to significantly more successful placements. This technique could potentially remove the requirement for a post-procedural chest X-ray, especially during peripherally inserted central catheter (PICC) line insertion.
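The odds ratio and its 95% confidence interval reported above can be computed from a 2x2 table of placement outcomes with the standard log-odds formula. The counts below are invented for demonstration and are not the review's data.

```python
# Odds ratio with Woolf (log) 95% confidence interval from a 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b: group-1 success/failure counts; c,d: group-2 success/failure counts."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 95/100 accurate placements with ECG guidance vs 80/100 without.
or_, lo, hi = odds_ratio_ci(95, 5, 80, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # prints 4.75 1.71 13.22
```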

Relevância: 80.00%

Resumo:

This paper studies the problem of selecting users in an online social network for targeted advertising so as to maximize the adoption of a given product. In previous work, two families of models have been considered to address this problem: direct targeting and network-based targeting. The former approach targets users with the highest propensity to adopt the product, while the latter targets users with the highest influence potential, that is, users whose adoption is most likely to be followed by subsequent adoptions by peers. This paper proposes a hybrid approach that combines a notion of propensity and a notion of influence into a single utility function. We show that targeting a fixed number of high-utility users results in more adoptions than targeting either highly influential users or users with high propensity.
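The hybrid idea can be sketched as scoring each user by a combination of the two signals and targeting the top-k. The linear combination, the weight alpha, and the toy scores below are our illustrative assumptions; the paper's actual utility function may be defined differently.

```python
# Select the k users with the highest combined propensity/influence utility.
def top_k_by_utility(propensity, influence, alpha=0.5, k=2):
    """alpha weights propensity against influence (illustrative choice)."""
    utility = {u: alpha * propensity[u] + (1 - alpha) * influence[u]
               for u in propensity}
    return sorted(utility, key=utility.get, reverse=True)[:k]

# Invented per-user scores in [0, 1].
propensity = {"ann": 0.9, "bob": 0.3, "eve": 0.6}
influence  = {"ann": 0.1, "bob": 0.8, "eve": 0.7}
print(top_k_by_utility(propensity, influence))  # prints ['eve', 'bob']
```

Here "eve" wins on combined utility even though "ann" has the highest propensity and "bob" the highest influence, which is the point of mixing the two signals.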

Relevância: 80.00%

Resumo:

The study examines one case of students' experiences of the activity in a collaborative learning process in a networked learning environment, and explores whether those experiences explain participation, or lack of participation, in the activities. The research task was to examine the students' experiences of the database of the networked learning environment, of participating in its construction, and of the ways of working needed to build the database. To contrast the students' experiences, their actual participation in building the database was established. Based on actual participation, groups more active and more passive than average were separated, and their experiences were compared with each other. The research material was collected from the course Cognitive and Creative Processes, which was offered to students of the Department of Textile Teacher Education at the University of Helsinki and to students of the teacher education units giving textile education in Turku, Rauma and Savonlinna at the beginning of 2001. In this course, creativity was examined from a psychological and sociocultural perspective with the aim of realizing a collaborative progressive inquiry process. The course was held in a network-based Future Learning Environment (Fle 2), apart from the opening lecture and the training in the use of the learning environment. This study analyzed the learning diaries that the students had sent to the tutor once a week for four weeks, and the final thoughts written into the database of the learning environment. Content analysis was applied as the research method. The case was enriched from another point of view by examining the messages the students had written into the learning environment using social network analysis.
The theoretical base of the study draws on research into computer-supported collaborative learning, conceptions of learning as a process of participation and knowledge building, and the possibilities and limitations of network-based learning environments. The results show that both using the network-based learning environment and collaborative ways of studying were new to the students. The students were positively surprised by the feedback and support provided by the community. On the other hand, they also experienced problems with facelessness and with managing the information in the learning environment. The active students seemed to be more ready for a progressive inquiry process: their attitudes and actions show that they strived to participate actively and invested in the process from both their own and the community's point of view. The more passive students reported acting in order to earn credits, and they had a harder time perceiving the thoughts presented on the net as common progression. When arranging similar courses in the future, attention should be paid to how to get students to act in the ways necessary for knowledge building, which differ from more traditional ways of studying. The difficulty that students used to traditional studying methods have in adapting to collaborative knowledge building was evident on the course Cognitive and Creative Processes. Keywords: computer-supported collaborative learning, knowledge building, progressive inquiry, participation

Relevância: 80.00%

Resumo:

The main purpose of the study was to outline the values and evaluation of network-based education from a class teacher's perspective. The theoretical framework comprised media education and the didactic conception of the teaching-studying-learning process. The didactic basis drew on the teacher's pedagogical thinking of Kansanen et al. (2000), the didactic cycle of Lahdes (1997), and the teaching-studying-learning process of Uljens (1997). The starting point was to use the class teacher's evaluation to reflect on how teaching, studying and learning combine with information and communication technology (ICT) in the school context. The main research questions were: 1) What was the class teacher's conception of implementing network-based education? 2) How did the class teacher evaluate network-based education? The first question was also approached from the perspective of values and evaluation. The method used was the thematic interview. The interview themes were structured with the help of Gallini's (2001) sociocultural model for evaluating network-based education and Tella and Mononen-Aaltonen's (2001) multi-level model of media education. For data collection, eight class teachers from Helsinki answered a short questionnaire and were then interviewed immediately afterwards. In the qualitative analysis the data were thematised, and alongside this a data-driven analysis was constructed; the conclusions of the study arose from the interaction of the two. As a central result, two value perspectives related to class teachers' network-based education were identified: (A) the value perspective of teaching-studying-learning methods and outcomes, and (B) the educational and school-culture value perspective. Based on these value perspectives, the conclusions outline different levels of class teachers' evaluation of teaching, studying and learning when ICT is part of the process. Two levels were refined into foci of evaluation of network-based education: (1) a technological focus and (2) a goal-oriented focus. In the technological focus of evaluation (1), evaluation is dominated by concern for the pupil as a human being. Evaluation relates either to the pupil having to learn basic ICT skills and understand the possibilities of the Internet, or to the teacher seeking to protect children from the threats of the Internet and from excessive exposure to, for example, digital games. The goal-oriented focus of evaluation (2) rests on the kinds of goals the class teacher sets for teaching in general, even when ICT is not part of the process. Behind this focus is also the media-education perspective on the educational use of ICT, in which it is essential that the planning and evaluation of teaching start not from the technology but from didactics and pedagogical values. The two foci of evaluation of network-based education can be seen as complementing each other. In the variation of these foci one can speak of a class teacher's true media skill, which extends beyond professional competence into the teacher's personal arc of development. Keywords: Network-Based Education (NBE), values, evaluation, teaching-studying-learning process, teacher's pedagogical thinking, educational use of information and communication technologies (ICT)

Relevância: 80.00%

Resumo:

Red blood cells (RBCs) are the most common type of blood cell, making up about 99% of all blood cells. During the circulation of blood in the cardiovascular network, RBCs squeeze through the tiny blood vessels (capillaries). They exhibit various types of motion and deformed shapes when flowing through these capillaries, whose diameters vary between 5 and 10 µm. RBCs occupy about 45% of the whole blood volume, and the interaction between RBCs directly influences their motion and deformation. However, most previous numerical studies have explored the motion and deformation of a single RBC, neglecting the interaction between RBCs. In this study, the motion and deformation of two 2D (two-dimensional) RBCs in capillaries are comprehensively explored using a coupled smoothed particle hydrodynamics (SPH) and discrete element method (DEM) model. In order to model the interactions between RBCs clearly, only two RBCs are considered, even though blood with RBCs flows continuously through the blood vessels. A spring network based on the DEM is employed to model the viscoelastic membrane of the RBC, while the fluid inside and outside the RBC is modelled by SPH. The effects of the initial distance between the two RBCs, the membrane bending stiffness (Kb) of one RBC, and the undeformed diameter of one RBC on the motion and deformation of both RBCs in a uniform capillary are studied. Finally, the deformation behaviour of two RBCs in a stenosed capillary is also examined. Simulation results reveal that the interaction between RBCs has a significant influence on their motion and deformation.
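The DEM spring network treats the membrane as particles joined by springs, each exerting a restoring force proportional to its stretch. The sketch below is our own minimal 2D illustration of that force calculation, not the paper's full SPH-DEM model; the geometry, spring constant and rest length are arbitrary.

```python
# Net Hookean spring forces on a closed ring of membrane particles (2D).
import math

def spring_forces(points, k_s, rest_len):
    """Returns per-particle net force [fx, fy] from springs joining neighbours."""
    n = len(points)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        j = (i + 1) % n                       # ring: last particle joins the first
        dx = points[j][0] - points[i][0]
        dy = points[j][1] - points[i][1]
        length = math.hypot(dx, dy)
        f = k_s * (length - rest_len)         # positive when the spring is stretched
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    return forces

# Square "membrane" with side 1.0 and rest length 0.8: every spring is
# stretched, so each corner is pulled inward along both of its springs.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
f = spring_forces(pts, k_s=10.0, rest_len=0.8)
print(f[0])  # prints [2.0, 2.0]: the origin corner is pulled toward the centre
```

A full membrane model adds bending stiffness (the paper's Kb) and area/volume constraints on top of these stretch forces.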

Relevância: 80.00%

Resumo:

The objective of this study was to find out which factors made landowners interested in the From Sea to Forest co-operation network. Co-operation networks protect biodiversity across boundaries and among groups of landowners with different kinds of protection contracts. The social effects of the From Sea to Forest project are studied by analyzing the experience of co-operation and trust. Furthermore, the possibility to influence decision making when choosing the pilot areas and making the contracts was surveyed. Economic effects are estimated for those landowners who signed a protection contract for ten years. The study is part of the Finnish Forest Research Institute's project 'Ecological considerations in landscape-level collaborative planning of private forestry'. The material of the study comprises 13 interviews carried out in January 2006; seven of the interviewees were landowners and six were forest professionals. The interviews were transcribed and analyzed with the Atlas.ti programme. The economic effects were estimated with the MOTTI forest simulation programme. The From Sea to Forest project interested the landowners for similar reasons: the voluntariness of participation, compensation, fixed-term contracts and the possibility to protect forests so that the proprietary right remains. The interviewed landowners could be divided into four groups according to trust: 'networkers', 'opportunists', 'carefuls' and 'self-employed'. Only in the group of 'opportunists' did the project create enough trust for a significant increase in interest to participate to be noticed. In all the other groups the project did not create remarkable trust, so trust did not affect the landowners' decisions to participate. Other factors, like compensation and voluntariness, were decisive for their interest to participate. The From Sea to Forest project was not a network based on landowners' co-operation; communication took place directly with the project worker.
The effect on landowners' income of signing a ten-year 'Natural value trading' contract was analyzed by comparing the protection income with the forestry income predicted in case the protection contract had not been agreed on. For two landowners there was no suggested forestry work within ten years, so their protection income could be additional income if they decided to log their forests later. For three landowners, delayed thinning of the sapling stand would cause income losses in the future if they decided to move to active forestry after ten years of protection. For eight landowners the effect of protection on income is positive if they move to active forestry after the ten-year protection period. This occurs because the tree stand is already mature for final felling in terms of its age, but ten more years of growth increase the net present value. Longer-term protection might diminish the net present value. The protection was profitable because hectare-specific forestry income grew compared with the income from the forestry cutting plan.
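The comparison made above boils down to a net present value calculation: discount the ten years of protection payments plus the deferred felling income, and compare with felling immediately. All monetary figures and the 3% discount rate below are invented for demonstration; the study's MOTTI-based estimates used real stand data.

```python
# Net present value of two alternative cash flow streams (illustrative figures).
def npv(cashflows, rate=0.03):
    """cashflows: list of (year, amount); year 0 is today."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

fell_now = npv([(0, 10_000)])                  # immediate final felling
protect_then_fell = npv(
    [(y, 300) for y in range(1, 11)]           # annual protection payments
    + [(10, 12_000)]                           # larger stand felled in year 10
)
print(round(fell_now), round(protect_then_fell))
```

With these invented numbers the protect-then-fell stream has the higher present value, mirroring the study's finding that ten more years of growth can outweigh the delay for stands already mature for final felling.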

Relevância: 80.00%

Resumo:

This paper addresses an output feedback control problem for a class of networked control systems (NCSs) with a stochastic communication protocol. Under the scenario that only one sensor is allowed to obtain communication access at each transmission instant, a stochastic communication protocol is first defined, where the communication access is modelled by a discrete-time Markov chain with partly unknown transition probabilities. Secondly, using a network-based output feedback control strategy and a time-delay division method, the closed-loop system is modelled as a stochastic system with multiple time-varying delays, where the inherent characteristics of the network delay are carefully considered to improve the control performance. Then, based on the constructed stochastic model, two sufficient conditions are derived for ensuring the mean-square stability and stabilization of the system under consideration. Finally, two examples are given to show the effectiveness of the proposed method.
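The protocol can be pictured by simulating which sensor holds the channel at each transmission instant under a Markov chain. The two-sensor transition matrix below is an invented example; the paper additionally allows some transition probabilities to be unknown, which this sketch does not model.

```python
# Simulate Markov-chain channel access: one sensor transmits per instant.
import random

def simulate_access(P, start, steps, rng):
    """P[i][j]: probability the access moves from sensor i to sensor j."""
    state, visits = start, [0] * len(P)
    for _ in range(steps):
        visits[state] += 1                    # this sensor holds the channel now
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):      # sample the next state
            acc += p
            if u < acc:
                state = j
                break
    return visits

P = [[0.7, 0.3],      # sensor 0 tends to keep the channel
     [0.4, 0.6]]      # sensor 1 gives it up slightly more often
rng = random.Random(42)
visits = simulate_access(P, start=0, steps=10_000, rng=rng)
print(visits)  # stationary distribution is (4/7, 3/7), so roughly [5714, 4286]
```

The long-run access shares follow the chain's stationary distribution, which is what a mean-square stability analysis of the closed loop has to account for.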

Relevância: 80.00%

Resumo:

A reduction in the natural frequencies of a civil engineering structure, however small, is the first and easiest indicator of its impending damage. As first-level screening for health monitoring, information on the frequency reduction of a few fundamental modes can be used to estimate the positions and magnitude of damage in a smeared fashion. The paper presents the eigenvalue sensitivity equations, derived from a first-order perturbation technique, for typical infrastructural systems such as a simply supported bridge girder modelled as a beam, an end-bearing pile modelled as an axial rod, and a simply supported plate as a continuum dynamic system. A discrete structure, such as a building frame, is solved for damage using eigenvalue sensitivities derived from a computational model. Lastly, neural network-based damage identification is also demonstrated for a simply supported bridge beam, where known pairs of damage and frequency vectors are used to train a neural network. The performance of these methods under the influence of measurement error is outlined. It is hoped that the developed method could be integrated into a typical infrastructural management program, such that the magnitudes of damage and their positions can be obtained from natural frequencies synthesized from excited/ambient vibration signatures.
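First-order eigenvalue sensitivity can be shown on a toy two-degree-of-freedom spring-mass chain with unit masses: with mass-normalised mode shapes phi, a stiffness perturbation dK shifts each eigenvalue by approximately phi^T dK phi. The 2x2 system below is our own illustration, not one of the paper's structural models.

```python
# First-order eigenvalue shifts for a 2-DOF chain under a stiffness loss.
import math

def eig_sym_2x2(a, b, d):
    """Eigenpairs of [[a, b], [b, d]], eigenvectors unit-normalised."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    lams = [tr / 2 - disc, tr / 2 + disc]
    vecs = []
    for lam in lams:
        vx, vy = (b, lam - a) if abs(b) > 1e-12 else (1.0, 0.0)
        n = math.hypot(vx, vy)
        vecs.append((vx / n, vy / n))
    return lams, vecs

# Stiffness matrix of two unit masses joined by three unit springs: [[2,-1],[-1,2]].
lams, vecs = eig_sym_2x2(2.0, -1.0, 2.0)

# Damage: 10% loss of the middle spring, dK = -0.1 * [[1, -1], [-1, 1]].
dK = [[-0.1, 0.1], [0.1, -0.1]]
d_lams = [sum(v[i] * dK[i][j] * v[j] for i in range(2) for j in range(2))
          for v in vecs]
exact = eig_sym_2x2(1.9, -0.9, 1.9)[0]
print([round(x, 3) for x in d_lams])  # prints [0.0, -0.2]
```

The in-phase mode does not stretch the middle spring, so its frequency is unaffected; the anti-phase mode drops, and for this rank-one perturbation the first-order estimate matches the exact shift. That pattern of mode-dependent shifts is what lets a few measured frequencies localise damage.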

Relevância: 80.00%

Resumo:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
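The second approach can be sketched as each sensor applying a threshold test to its own reading, broadcasting one bit, and a fusion centre declaring an intruder when any bit is set (an OR rule). The Gaussian noise model, the threshold, the OR fusion rule and the assumption that only one sensor sees the signal are our simplifying illustrations, not the paper's exact setup.

```python
# One-bit quantization per sensor plus OR-rule fusion (illustrative Monte Carlo).
import random

def sensor_bit(reading, threshold):
    return 1 if reading > threshold else 0

def fuse(bits):
    return any(bits)              # OR fusion: any triggered sensor raises the alarm

def trial(intruder_present, rng, n_sensors=5, signal=2.0, threshold=1.0):
    # Sensing is local: only the sensor nearest the intruder (index 0) sees signal.
    readings = [(signal if intruder_present and i == 0 else 0.0)
                + rng.gauss(0.0, 0.3) for i in range(n_sensors)]
    return fuse(sensor_bit(r, threshold) for r in readings)

rng = random.Random(7)
detections = sum(trial(True, rng) for _ in range(1000))
false_alarms = sum(trial(False, rng) for _ in range(1000))
print(detections, false_alarms)   # detections near 1000, false alarms near 0
```

Raising the threshold trades detection probability against false-alarm rate, which is exactly the Neyman-Pearson trade-off behind the optimal choice of quantization rule.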

Relevância: 80.00%

Resumo:

Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management, a majority of them based on linear regression. The problem with linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed; however, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression that can handle non-linear relationships is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, with both expense and revenue manipulation ranging between -5% and 5% of lagged total assets. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
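The linear baseline all seven models build on is the Jones (1991) model: regress total accruals (scaled by lagged assets) on the inverse of lagged assets, the change in revenues, and gross PPE, and treat the residuals as discretionary accruals. The sketch below is our illustration of that baseline with fabricated firm-year data; the neural-network variants the study favours replace the linear regression with a feed-forward network.

```python
# Jones-model discretionary accruals as OLS residuals (fabricated data).
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def jones_residuals(X, y):
    """OLS via normal equations; residuals = discretionary accruals."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return [yi - sum(b * xi for b, xi in zip(beta, r)) for r, yi in zip(X, y)]

# Columns: 1/lagged assets, change in revenue, gross PPE (all scaled; invented).
X = [[0.01, 0.10, 0.50], [0.02, 0.05, 0.40], [0.01, 0.20, 0.60],
     [0.03, 0.15, 0.30], [0.02, 0.08, 0.55]]
y = [0.04, 0.01, 0.06, 0.03, 0.02]     # total accruals / lagged assets
resid = jones_residuals(X, y)
print([round(r, 4) for r in resid])
```

Because the fit is linear, any genuine non-linearity between accruals and performance ends up in these residuals, which is the bias the neural-network models are designed to avoid.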

Relevância: 80.00%

Resumo:

An approximate dynamic programming (ADP) based neurocontroller is developed for a heat transfer application. The heat transfer problem for a fin in a car's electronic module is modeled as a nonlinear distributed parameter (infinite-dimensional) system, taking into account heat loss and generation due to conduction, convection and radiation. A low-order, finite-dimensional lumped parameter model for this problem is obtained using Galerkin projection and basis functions designed through the 'proper orthogonal decomposition' (POD) technique and 'snap-shot' solutions. A suboptimal neurocontroller is obtained with a single-network adaptive critic (SNAC). A further contribution of this paper is an online robust controller that accounts for unmodeled dynamics and parametric uncertainties. A weight update rule is presented that guarantees boundedness of the weights and eliminates the need for the persistence of excitation (PE) condition to be satisfied. Since the ADP and neural network-based controllers are of fairly general structure, they appear to have the potential to be controller synthesis tools for nonlinear distributed parameter systems, especially where it is difficult to obtain an accurate model.

Relevância: 80.00%

Resumo:

Accessibility is a crucial factor for interaction between areas in economic, cultural, political and environmental terms. Information concerning accessibility is therefore relevant for informed decision making, planning and research. The Loreto region in the Peruvian Amazonia provides an interesting setting for an accessibility study. Loreto is sparsely populated, and because there are few roads in the region, in practice all movement and transportation happens along the river network. Due to the proximity of the Andes, river dynamics are strong, and annual changes in water level combined with these dynamic processes constantly reshape the accessibility patterns of the region. Selling non-timber forest products (NTFP) and agricultural products (AP) in regional centres is an important income source for local rainforest dwellers. Thus, accessibility to the centres is crucial for the livelihood of the local population.

In this thesis I studied how accessible the regional centre Iquitos is from other parts of Loreto. In addition, I studied the regional NTFP/AP trade patterns and compared them with the patterns of accessibility. Based on GPS measurements, using GIS, I created a time-distance surface covering Loreto. This surface describes the time-distance to Iquitos along the river network. Based on interview material, I assessed annual changes in the accessibility patterns of the region. The most common regional NTFP/AP were classified according to the length of time they can be preserved, and based on the accessibility surface, I modelled a catchment area for each of these product classes.

According to my results, navigation speeds vary considerably in different parts of the river network, depending on river types, vessels, flow direction and season. Navigating downstream is generally faster than navigating upstream. Thus, Iquitos is more accessible from areas situated south and south-west of the city, such as along the rivers Ucayali and Marañon. Differences in accessibility between seasons are also substantial: during the dry season navigation is slower due to lower water levels and emerging sand bars. Regularly operating boats follow routes only along certain rivers, and close to Iquitos transport facilities are more abundant than in more distant areas. Most of the products present in Iquitos market places are agricultural products, and the share of NTFP is significantly smaller. Most of the products were classified in product class 2, and the catchment area for these products is rather small. Many products also belonged to class 5, and the catchment area for these products reaches up to the edges of my study area, following the patterns of the river network.

The accessibility model created in this study predicts travel times relatively well, although in some cases the modelled time-distances are substantially shorter than the observed ones. This is partly because real-life navigation routes are more complicated than the modelled routes. Rainforest dwellers with easier access to Iquitos have more opportunities in terms of the products they decide to market, and can thus better take advantage of other factors affecting the market potential of different products.

In all, understanding spatial variation in accessibility is important. In the Amazonian context it is difficult to combine the accessibility-related needs of the local dwellers with conservation purposes, and the future challenge lies in finding solutions that satisfy both of these needs.
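The time-distance surface idea can be sketched as a shortest-path computation: river reaches become directed graph edges weighted by travel time, with downstream legs faster than upstream ones, and Dijkstra's algorithm gives the time-distance to Iquitos from anywhere on the network. The toy network, place pairings and hour figures below are invented for illustration.

```python
# Time-distance over a directed river network via Dijkstra's algorithm.
import heapq

def time_distances(graph, source):
    """graph[u] = [(v, hours), ...]; returns shortest travel time from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Edge weights in hours; the same reach is faster downstream than upstream.
river = {
    "Iquitos": [("Nauta", 12.0)],         # upstream leg, slow
    "Nauta":   [("Iquitos", 8.0),         # downstream leg, fast
                ("Requena", 14.0)],
    "Requena": [("Nauta", 9.0)],
}
d = time_distances(river, "Requena")
print(d["Iquitos"])  # prints 17.0 (9 h to Nauta, then 8 h downstream)
```

Seasonal variation would be captured by swapping in a second set of edge weights for low-water conditions.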

Relevância: 80.00%

Resumo:

Reduced expression of CCR5 on target CD4(+) cells lowers their susceptibility to infection by R5-tropic HIV-1, potentially preventing transmission of infection and delaying disease progression. Binding of the HIV-1 envelope (Env) protein gp120 with CCR5 is essential for the entry of R5 viruses into target cells. The threshold surface density of gp120-CCR5 complexes that enables HIV-1 entry remains poorly estimated. We constructed a mathematical model that mimics Env-mediated cell-cell fusion assays, where target CD4(+)CCR5(+) cells are exposed to effector cells expressing Env in the presence of a coreceptor antagonist and the fraction of target cells fused with effector cells is measured. Our model employs a reaction network-based approach to describe the protein interactions that precede viral entry, coupled with the ternary complex model to quantify the allosteric interactions of the coreceptor antagonist, and predicts the fraction of target cells fused. By fitting model predictions to published data on cell-cell fusion in the presence of the CCR5 antagonist vicriviroc, we estimated the threshold surface density of gp120-CCR5 complexes for cell-cell fusion as approximately 20 µm⁻². Model predictions with this threshold captured data from independent cell-cell fusion assays in the presence of vicriviroc and rapamycin, a drug that modulates CCR5 expression, as well as assays in the presence of maraviroc, another CCR5 antagonist, using sixteen different Env clones derived from transmitted or early founder viruses. Our estimate of the threshold surface density of gp120-CCR5 complexes necessary for HIV-1 entry thus appears robust and may have implications for optimizing treatment with coreceptor antagonists, understanding the non-pathogenic infection of non-human primates, and designing vaccines that suppress the availability of target CD4(+)CCR5(+) cells.
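The role of the threshold can be illustrated with a drastic simplification of the reaction network: solve the surface law of mass action for the equilibrium density of gp120-CCR5 complexes and compare it with the estimated fusion threshold of about 20 complexes per square micron. The gp120 density, the CCR5 densities and the 2D dissociation constant below are invented; only the threshold value comes from the study.

```python
# Equilibrium complex density from surface mass action vs. the fusion threshold.
import math

FUSION_THRESHOLD = 20.0   # complexes per um^2, as estimated in the study

def complex_density(g_total, r_total, kd):
    """Solve (g - c)(r - c) = kd * c for the physical root c (all per um^2)."""
    b = g_total + r_total + kd
    return (b - math.sqrt(b * b - 4 * g_total * r_total)) / 2

for r in (10.0, 50.0, 200.0):             # assumed CCR5 surface densities
    c = complex_density(g_total=100.0, r_total=r, kd=30.0)
    print(r, round(c, 1), c >= FUSION_THRESHOLD)
```

With these assumed parameters, only the lowest CCR5 density keeps the complex density below the threshold, mirroring how reduced CCR5 expression can protect target cells from entry.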
