922 results for Expert Opinions
Abstract:
The new Swiss Federal Patent Court, with nationwide first-instance jurisdiction over all civil patent matters, has been operating since 1 January 2012. This article reviews and contextualizes the most important patent cases of the Swiss Federal Patent Court and the Swiss Federal Supreme Court. It concludes that the revamped Swiss patent litigation system has the potential to turn Switzerland into a competitive venue for the adjudication of patent matters in Europe.
Abstract:
The new Swiss Federal Patent Court, with nationwide first-instance jurisdiction over all civil patent matters, has been operating since January 1, 2012. This article reviews and contextualizes the most important patent cases published in 2012 by the Swiss Federal Patent Court and the Swiss Federal Supreme Court. More specifically, the article covers cases on issues such as the evidentiary status of party expert opinions, the formal requirements for requests for injunctive relief, the infringement and non-obviousness tests employed by the Swiss Federal Patent Court, the use of reports and statements from technical judges in lieu of expert opinions, and the procedural devices for the pre-trial taking of evidence, in particular the new patent-specific device of precise description. The author suggests that designing the Federal Patent Court to include technically trained judges may lead to a more automatic adoption of the practices and case law of the European Patent Office. The article concludes that the revamped Swiss patent litigation system has the potential to turn Switzerland into a competitive venue for the adjudication of patent matters in Europe.
Abstract:
Swiss aquaculture farms were assessed according to their risk of acquiring or spreading viral haemorrhagic septicaemia (VHS) and infectious haematopoietic necrosis (IHN). Risk factors for the introduction and spread of VHS and IHN were defined and assessed using published data and expert opinions. Among the 357 aquaculture farms identified in Switzerland, 49.3% were categorised as high risk, 49.0% as medium risk and 1.7% as low risk. According to the new Directive 2006/88/EC for aquaculture of the European Union, the frequency of farm inspections must be derived from their risk levels. A sensitivity analysis showed that water supply and fish movements were highly influential on the output of the risk assessment regarding the introduction of VHS and IHN. Fish movements were also highly influential on the risk assessment output regarding the spread of these diseases.
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that arise, in both academic and real applications. There are several machine learning techniques, depending on both the data characteristics and the purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also related tasks such as validation. When only some of the available data are labeled while the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because the labeling process is often too costly or simply neglected, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, distinct from the presence of data labels, is the relevance or irrelevance of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process.
A recent clustering trend related to data relevance, called subspace clustering, holds that different clusters may be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As noted above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to study critically how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search first for subspaces and then for clusters. It assigns each instance to only one cluster (hard clustering) and is based on mapping known labels to subspaces using supervised classification techniques; the subspaces are then used to find clusters using traditional clustering techniques. The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. It assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach.
The different proposals are tested using different real and synthetic databases, and comparisons to other methods are included when appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems of our time: modeling the human brain. Specifically, expert neuroscientists have not agreed on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work, since there is no common way to name neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
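As an illustrative aside, the "hard" variant described in this abstract (use labels to find a subspace first, then cluster) can be sketched as follows. This is a toy reconstruction under stated assumptions, not the thesis's algorithm: a simple between-class variance score stands in for a supervised classifier, plain k-means stands in for the clustering step, and the data are synthetic.

```python
# Toy sketch of hard semi-supervised subspace clustering (not the thesis's
# algorithm): score features with the labelled points, keep the most
# discriminative subspace, then run plain k-means on all data in it.
import random

def subspace_scores(points, labels):
    """Between-class / total variance ratio per feature, on labelled points."""
    n, dims = len(points), len(points[0])
    scores = []
    for d in range(dims):
        vals = [p[d] for p in points]
        mean = sum(vals) / n
        total = sum((v - mean) ** 2 for v in vals)
        between = 0.0
        for c in set(labels):
            cls = [points[i][d] for i in range(n) if labels[i] == c]
            cm = sum(cls) / len(cls)
            between += len(cls) * (cm - mean) ** 2
        scores.append(between / total if total else 0.0)
    return scores

def kmeans(points, k, dims, iters=20, seed=0):
    """Plain k-means, with distances restricted to the selected dimensions."""
    rng = random.Random(seed)
    centres = [list(points[i]) for i in rng.sample(range(len(points)), k)]
    dist = lambda p, q: sum((p[d] - q[d]) ** 2 for d in dims)
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist(p, centres[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:  # keep old centre if a cluster empties out
                centres[c] = [sum(m[j] for m in members) / len(members)
                              for j in range(len(points[0]))]
    return assign

# Synthetic data: two clusters separated only on feature 0; feature 1 is noise.
rng = random.Random(1)
data = ([(rng.gauss(0, 0.3), rng.gauss(0, 3)) for _ in range(30)] +
        [(rng.gauss(5, 0.3), rng.gauss(0, 3)) for _ in range(30)])
labelled = data[:5] + data[30:35]          # only ten points carry labels
labels = [0] * 5 + [1] * 5
scores = subspace_scores(labelled, labels)
subspace = [d for d, s in enumerate(scores) if s == max(scores)]
clusters = kmeans(data, 2, subspace)
```

The thesis's soft variant instead folds the labels and the subspace search into a model-based (mixture) clustering, re-estimating memberships and subspaces iteratively.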
Abstract:
The figure of the coordinator for health and safety issues in the construction sector first appeared in Spanish legislation through the incorporation of the European Directives (in this case Royal Decree 1627/97 on the minimum health and safety regulations in construction works), and is viewed differently across countries of the European Union regarding how coordinators are hired and their role in the construction industry. Coordinating health and safety issues is also a management process that requires certain competencies based not only on technical or professional training but also, given the work environment, on strategies and tools related to experience and personal skills. Through research that took account of expert opinions on the matter, we have identified which competencies the health and safety coordinator needs in order to improve safety in the works they coordinate. The conclusions of the analyses, performed using appropriate statistical methods (comparison of means and multivariate analysis techniques), will enable training programmes to be designed and ensure that the health and safety coordinators selected have the competencies required to carry out their duties.
Abstract:
This paper addresses the historical evolution of university education in forestry engineering in Spain, from its inception to the present day, within the changing context of the EHEA and linked to professional competences. Although the research methodology is mainly a historical document review, expert opinions on the educational planning of forestry engineering degrees in Spain are also included. The results show an evolution from centralized planning based on the transmission of technical knowledge towards an approach based on competences (technical, contextual and behavioural) that focuses on learning to improve employability.
Abstract:
Cost and schedule overruns constitute a very frequent phenomenon in the construction industry.
A large number of construction projects do not finish within the estimated time and cost, and this scenario seems to be becoming the norm rather than the exception. Construction projects are heterogeneous by nature and can become very complex, as they involve a large number of processes subject to many variables and factors that may give rise to time and cost overruns. Time and cost overruns cause dissatisfaction not only to owners but to all stakeholders involved in the project, often leading to undesirable conflicts and adversarial relationships between project participants. Hence, it becomes necessary to adopt an effective risk management strategy as an essential part of project management in order to achieve project success. Construction projects may have different characteristics depending on the type of project. This research focuses specifically on hotel construction projects. Hotel projects usually involve complex organizational structures, with many participants and processes that develop in an environment that is already dynamic by nature. In this type of project, the achievement of time and cost objectives is particularly important, as any delay to the hotel opening date will result in significant loss of business and market share and may also have important implications for hotel operations. If the risk factors that lead to time and cost overruns are known in advance, preventive actions can be established to avoid them, so that time and cost overruns are minimized. This constitutes the aim of this research, risk identification being the first step of any effective risk management strategy for project success. The context of this research is a particular geographical area: Spain. Tourism in Spain is a major contributor to the Spanish economy, and efficiency and competitiveness should also be reflected in the building processes of the hotel industry, where delays and cost overruns should be kept to a minimum. The aim of this study is to explore the most critical risk factors leading to time and cost overruns in hotel construction projects in Spain. From the analysis of the literature, a risk identification framework is proposed and then analyzed through a qualitative assessment based on expert opinions and specific case studies. From the results of this assessment, levels of risk criticality are determined for the identified factors, perceptions of risk levels among different groups of respondents are compared, and a procedure for prioritizing factors in terms of response needs is established. A final risk register matrix framework is then developed to assist hotel owners, project management companies and other hotel project stakeholders, providing them with a base to design their own specific risk management plans and thereby contributing to project success in terms of achieving cost and time objectives.
Abstract:
Background & Aims: Treatment of chronic hepatitis B (CHB) involves a number of complex and controversial issues. Expert opinions may differ from those of practicing hepatologists and gastroenterologists. We aimed to explore this issue further after a critical review of the literature. Methods: A panel of 14 international experts graded the strength of evidence for 16 statements addressing 3 content areas: patient selection, therapeutic end points, and treatment options. Available data relating to the statements were reviewed critically in 3 small work groups. After discussion of each statement with the entire panel, the experts voted anonymously to accept or reject statements based on the strength of evidence and their experience. A total of 241 members of the American Association for the Study of Liver Diseases (AASLD) responded to the same statements and their responses were compared with those of the experts. A discordant response was defined as a difference of more than 20% in any of the 5 graded levels of response (accept or reject) between the 2 groups. Results: With the exception of 2 statements, the experts’ responses were relatively uniform. However, the responses of the AASLD members were discordant from the experts in 12 statements, spanning all 3 content areas. Conclusions: Several areas of disagreement on the management of CHB exist between experts and AASLD members. Our results indicate a potential knowledge gap among practicing hepatologists. Better educational efforts are needed to meet the challenge of managing this complex disorder in which even expert opinion occasionally may disagree.
Abstract:
In this paper a Hierarchical Analytical Network Process (HANP) model is demonstrated for evaluating alternative technologies for generating electricity from MSW in India. The technological alternatives and evaluation criteria for the HANP study are characterised by reviewing the literature and consulting experts in the field of waste management. Technologies reviewed in the context of India include landfill, anaerobic digestion, incineration, pelletisation and gasification. To investigate the sensitivity of the result, we examine variations in expert opinions and carry out an Analytical Hierarchy Process (AHP) analysis for comparison. We find that anaerobic digestion is the preferred technology for generating electricity from MSW in India. Gasification is indicated as the preferred technology in an AHP model due to the exclusion of criteria dependencies and in an HANP analysis when placing a high priority on net output and retention time. We conclude that HANP successfully provides a structured framework for recommending which technologies to pursue in India, and the adoption of such tools is critical at a time when key investments in infrastructure are being made. Therefore the presented methodology is thought to have a wider potential for investors, policy makers, researchers and plant developers in India and elsewhere.
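For readers unfamiliar with the mechanics underlying both AHP and HANP, the core pairwise-comparison step can be sketched as follows. The criteria and the judgement matrix below are hypothetical, not data from the study; this is the standard column-normalisation approximation of the AHP priority vector, together with Saaty's consistency ratio.

```python
# Sketch of the AHP priority computation (hypothetical judgements, not the
# study's data): normalise each column of the pairwise comparison matrix,
# then average across rows to approximate the principal eigenvector.

def ahp_priorities(matrix):
    """Column-normalisation approximation of the AHP priority vector."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def consistency_ratio(matrix, priorities):
    """Saaty's consistency ratio; CR < 0.10 is conventionally acceptable."""
    n = len(matrix)
    weighted = [sum(matrix[i][j] * priorities[j] for j in range(n))
                for i in range(n)]
    lambda_max = sum(weighted[i] / priorities[i] for i in range(n)) / n
    consistency_index = (lambda_max - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's tabulated RI values
    return consistency_index / random_index[n]

# Hypothetical expert judgements comparing three criteria, e.g. net energy
# output, retention time and capital cost (1 = equal, 9 = extreme preference).
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 0.5, 1.0]]
w = ahp_priorities(A)        # priority weights, summing to 1
cr = consistency_ratio(A, w)
```

In HANP the same priority computation is applied within a network that additionally models dependencies between criteria, which is why the two methods can rank the alternatives differently, as the abstract reports.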
Abstract:
Background Despite advances made in treating coronary heart disease (CHD), mortality due to CHD in Syria has been increasing for the past two decades. This study aims to assess CHD mortality trends in Syria between 1996 and 2006 and to investigate the main factors associated with them. Methods The IMPACT model was used to analyze CHD mortality trends in Syria based on numbers of CHD patients, utilization of specific treatments, trends in major cardiovascular risk factors in apparently healthy persons and CHD patients. Data sources for the IMPACT model included official statistics, published and unpublished surveys, data from neighboring countries, expert opinions, and randomized trials and meta-analyses. Results Between 1996 and 2006, CHD mortality rate in Syria increased by 64%, which translates into 6370 excess CHD deaths in 2006 as compared to the number expected had the 1996 baseline rate held constant. Using the IMPACT model, it was estimated that increases in cardiovascular risk factors could explain approximately 5140 (81%) of the CHD deaths, while some 2145 deaths were prevented or postponed by medical and surgical treatments for CHD. Conclusion Most of the recent increase in CHD mortality in Syria is attributable to increases in major cardiovascular risk factors. Treatments for CHD were able to prevent about a quarter of excess CHD deaths, despite suboptimal implementation. These findings stress the importance of population-based primary prevention strategies targeting major risk factors for CHD, as well as policies aimed at improving access and adherence to modern treatments of CHD.
Abstract:
The purpose of this study is to investigate the biometric technologies adopted by hotels and the perceptions of hotel managers toward biometric technology applications. A descriptive, cross-sectional survey was developed based on an extensive review of the literature and expert opinions. The population for this survey was property-level executive managers in U.S. hotels; members of the American Hotel and Lodging Association (AHLA) were selected as the target population for this study. The most frequent use of biometric technology is fingerprint scanning by hotel employees. Cost still seems to be one of the major barriers to the adoption of biometric technology applications. The findings of this study show that there is definitely a future for biometric technology applications in hotels; however, according to hoteliers, neither guests nor hoteliers are fully ready for it.
Abstract:
The Highway Safety Manual (HSM) estimates roadway safety performance based on predictive models that were calibrated using national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that experienced at least a total of 100 crashes per year. It also recommends that the factors be updated every two to three years, preferably on an annual basis. However, these recommendations are primarily based on expert opinions rather than data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation is aimed at determining the best way to meet three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency for calibration factor updates, and (3) the influential variables affecting calibration factors. In this dissertation, statewide segment and intersection data were first collected for most of the HSM recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing databases from the Florida Department of Transportation. With these data, the effect of sample size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across different roadway facilities, but they are also significantly higher than those recommended in the HSM. In addition, results from paired sample t-tests showed that calibration factors in Florida need to be updated annually. 
To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data. Traffic volume was consistently found to be the most influential. In addition, roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
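As context for the abstract above, the HSM local calibration factor itself is a simple ratio. The sketch below uses made-up site data (not the Florida data) and also encodes the HSM's sample-size rule of thumb that the dissertation challenges.

```python
# Illustrative sketch of an HSM local calibration factor (made-up numbers,
# not Florida data): the ratio of observed crashes to crashes predicted by
# the nationally calibrated safety performance function over sampled sites.

def calibration_factor(observed, predicted):
    """C = sum(observed) / sum(predicted) over the calibration sites."""
    return sum(observed) / sum(predicted)

def meets_hsm_rule_of_thumb(observed, years=1):
    """HSM guidance: 30-50 sites totalling at least 100 crashes per year.
    (The dissertation argues the required sample is facility-specific and
    often much larger.)"""
    return len(observed) >= 30 and sum(observed) / years >= 100

# Hypothetical one-year crash counts at five local sites vs. SPF predictions.
obs = [4, 2, 7, 3, 5]
pred = [3.1, 2.8, 5.5, 2.9, 4.4]
C = calibration_factor(obs, pred)  # C > 1 means the SPF under-predicts locally
```

Re-estimating C on fresh data each year, and testing whether it changed significantly, is essentially what the dissertation's paired-sample t-tests examine.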
Abstract:
This paper deals with a very important issue in any knowledge engineering discipline: the accurate representation and modelling of real life data and its processing by human experts. The work is applied to the GRiST Mental Health Risk Screening Tool for assessing risks associated with mental-health problems. The complexity of risk data and the wide variations in clinicians' expert opinions make it difficult to elicit representations of uncertainty that are an accurate and meaningful consensus. It requires integrating each expert's estimation of a continuous distribution of uncertainty across a range of values. This paper describes an algorithm that generates a consensual distribution at the same time as measuring the consistency of inputs. Hence it provides a measure of the confidence in the particular data item's risk contribution at the input stage and can help give an indication of the quality of subsequent risk predictions.
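To make the idea concrete, here is a hedged sketch of this kind of consensus-building, an illustration of the general technique rather than the GRiST algorithm itself: each expert supplies a discrete uncertainty distribution over the same bins, the consensus is the bin-wise mean, and consistency is one minus the average pairwise total-variation distance.

```python
# Hedged sketch of consensus over expert uncertainty distributions (an
# illustration of the idea, not the GRiST algorithm): bin-wise averaging
# plus a pairwise-agreement consistency score.

def consensus(distributions):
    """Bin-wise mean of the experts' discrete distributions."""
    n = len(distributions)
    return [sum(d[i] for d in distributions) / n
            for i in range(len(distributions[0]))]

def consistency(distributions):
    """1.0 for identical experts, approaching 0 for maximal disagreement."""
    def tv(p, q):  # total-variation distance between two distributions
        return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
    pairs = [(p, q) for i, p in enumerate(distributions)
             for q in distributions[i + 1:]]
    return 1.0 - sum(tv(p, q) for p, q in pairs) / len(pairs)

# Three hypothetical clinicians' uncertainty over four risk levels.
experts = [[0.1, 0.6, 0.2, 0.1],
           [0.2, 0.5, 0.2, 0.1],
           [0.1, 0.5, 0.3, 0.1]]
c = consensus(experts)    # still a proper distribution (sums to 1)
k = consistency(experts)  # high: the clinicians broadly agree
```

A low consistency score at the input stage would flag exactly what the paper describes: reduced confidence in that data item's contribution to subsequent risk predictions.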
Abstract:
This is the first comprehensive analysis of the regulation of money market funds in the EU and US at both the theoretical and practical levels. Its unique multi-disciplinary approach provides a rigorous framework for comparative analysis and expert opinions on complex regulations that will help practitioners with decisions on portfolio management and with solving regulatory compliance issues. The theoretical framework includes unique cases and examples, as well as checklists to assist with the practice of fund management and legal risk analysis.