830 results for Gradient-based approaches


Relevance: 80.00%

Abstract:

Terrorism is identified in the EU Global Strategy for Foreign and Security Policy as one of the main threats to the security of the European Union. The fight against terrorism has borne fruit over the last fifteen years, but this article analyses the new Strategy and asks whether it will be enough to respond effectively to this threat, and whether all the means needed to tackle it are being deployed.

Relevance: 80.00%

Abstract:

Major food adulteration and contamination events occur with alarming regularity and are known to be episodic; the question is not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can leave them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This makes the task of deciding which analytical methods are most suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability, that much more challenging. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, and rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, ideally as handheld and/or remote sensor devices that can be taken to, or positioned on/at-line at, points of vulnerability along complex food supply networks, and should require a minimum of background training to acquire information-rich data rapidly (ergo, point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We offer a future perspective on how this growing portfolio of detection methods, together with developments in computational and information sciences such as predictive computing and the Internet of Things, will form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains. Food fraud is a problem of systems, and it therefore requires systems-level solutions and thinking.

Relevance: 80.00%

Abstract:

This article provides an overview of the relevance and import of the U.N. Convention on the Rights of the Child (CRC) to child health practice and pediatric bioethics. We discuss the four general principles of the CRC that apply to the implementation of all rights contained in the document, the right to health articulated in Article 24, and the important position ascribed to parents in fulfilling the rights of their children. We then examine how the CRC is implemented and monitored in law and practice. The CRC and associated principles of child rights provide strategies for rights-based approaches to clinical practice and health systems, as well as to policy design, professional training, and health services research. In light of the relevance of the CRC and principles of child rights to children’s health and child health practice, it follows that there is an intersection between child rights and pediatric bioethics. Pediatric bioethicists and child rights advocates should work together to define this intersection in all domains of pediatric practice.

Relevance: 80.00%

Abstract:

This paper, presented as the 9th Martin Tansey Memorial Lecture in April 2016, considers current and future approaches to sex offender reintegration. It critically examines the core models of reintegration, in terms of risk-based and strengths-based approaches in the criminal justice context, as well as barriers to reintegration, chiefly the community and negative public attitudes. It also presents an overview of new findings from recent empirical research on sex offender desistance, generally referred to as the process of slowing down or ceasing criminal behaviour. Finally, the paper presents an optimum vision for re-thinking sex offender reintegration, what I term 'inverting the risk paradigm', drawing out the key challenges and implications for criminal justice as well as for society more broadly.

Relevance: 80.00%

Abstract:

Android is becoming ubiquitous and currently has the largest share of the mobile OS market, with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware, which is becoming more sophisticated at evading state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques to avoid detection, which may defeat static-analysis-based approaches. Dynamic analysis, on the other hand, can be used to overcome this limitation. Hence, in this paper we propose DynaLog, a dynamic-analysis-based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for the mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open-source tools to extract and log high-level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications, demonstrating its capabilities for effective analysis and detection of malicious applications.
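As a rough illustration of how logged run-time behaviours can be turned into features for detection, here is a minimal sketch. It is not the authors' DynaLog implementation; the monitored event names and the feature list are illustrative assumptions.

```python
# Minimal sketch (not the DynaLog tool itself): map a stream of logged
# run-time events to a binary feature vector for later classification.
# The feature names below are illustrative assumptions.

MONITORED_FEATURES = [
    "sendTextMessage",   # SMS API call, often abused by premium-rate malware
    "getDeviceId",       # IMEI access
    "BOOT_COMPLETED",    # event: app starts itself at boot
    "DexClassLoader",    # dynamic code loading, a common evasion trick
    "openConnection",    # network activity
]

def extract_feature_vector(log_lines):
    """Return a 0/1 vector over MONITORED_FEATURES for one app run."""
    seen = set()
    for line in log_lines:
        for feature in MONITORED_FEATURES:
            if feature in line:
                seen.add(feature)
    return [1 if f in seen else 0 for f in MONITORED_FEATURES]

# Toy log from one emulator run of an app under test.
log = [
    "API: android.telephony.SmsManager.sendTextMessage(...)",
    "EVENT: android.intent.action.BOOT_COMPLETED received",
]
print(extract_feature_vector(log))  # [1, 0, 1, 0, 0]
```

Vectors like this, collected over many runs, are what a downstream classifier would consume to separate malicious from clean applications.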

Relevance: 80.00%

Abstract:

The sudden change in the level of environmental munificence in the construction sector during the period 2007–2015 provides a natural experiment for investigating the strategic and operating actions of firms, particularly during an environmental jolt. Statistics on business failures corroborate that neither academics nor practitioners have succeeded in guiding strategic action during periods of environmental jolt. Despite the recent increase in turnaround research in the general management domain, its use in the construction management realm remains underexplored. To address this research gap, five exploratory case studies from an ongoing PhD study were used to examine the turnaround strategies of construction contractors during a period of economic contraction and growth. The findings show that, although retrenchment is often considered to be a short-term strategy, this is clearly not the case, with the majority of contractors maintaining the strategy for 6–7 years. During the same period, internationalization became critical, with the turnaround process shifting towards a strategic reorientation that altered the firms' market domain. The case studies further suggest that strategic and operational actions resonate quite well with contemporary practice-based approaches to strategy making. The findings provide valuable assistance for construction contractors in dealing with organisational decline and in developing a successful turnaround response.

Relevance: 80.00%

Abstract:

This book chapter examines the conundrums and contradictions for the PSNI in delivering its community policing agenda within a post-conflict environment that simultaneously demands the delivery of counter-terrorism policing, in view of the current dissident terrorist threat.

Relevance: 80.00%

Abstract:

This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since the pioneering article of Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two written in collaboration with Sílvia Gonçalves and Benoit Perron.

In the first article, we study how bootstrap methods can be used for inference in models that forecast h periods into the future. To that end, we examine bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. We generalize the results of Gonçalves and Perron (2014) and propose and justify two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for confidence intervals on the estimated coefficients using these approaches, compared with asymptotic theory and with the wild bootstrap, in the presence of serial correlation in the regression errors.

The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of normally distributed innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat the factors as latent, our forecasts depend both on the estimated factors and on the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general assumptions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014).

In the third chapter, we propose consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we also establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with the available selection methods.

The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, those strongly correlated with interest rate spreads, together with the Fama-French factors, have good predictive power for excess returns.
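To make the first chapter's setting concrete, the sketch below implements a deliberately simplified version of the procedure: factors are extracted by principal components from a simulated panel, a factor-augmented regression is estimated by OLS, and a residual-based wild bootstrap produces confidence intervals for the coefficients. This is a minimal illustration under strong assumptions (i.i.d. Rademacher weights, estimated factors treated as fixed regressors); the block and dependent wild bootstraps studied in the thesis additionally handle serial correlation and factor-estimation error.

```python
# Simplified sketch of a wild bootstrap for a factor-augmented regression.
# NOT the thesis procedure: factors are held fixed across draws and the
# weights are i.i.d. Rademacher, so serial correlation is ignored.
import numpy as np

rng = np.random.default_rng(0)
T, N, k, B = 200, 100, 2, 999

# Simulated panel X driven by k latent factors, and a target y.
F_true = rng.standard_normal((T, k))
X = F_true @ rng.standard_normal((k, N)) + rng.standard_normal((T, N))
y = F_true @ np.array([1.0, -0.5]) + rng.standard_normal(T)

# Step 1: extract factors by principal components.
_, _, Vt = np.linalg.svd(X @ X.T / (T * N))
F_hat = Vt[:k].T * np.sqrt(T)                 # T x k estimated factors

# Step 2: OLS of y on a constant and the estimated factors.
Z = np.column_stack([np.ones(T), F_hat])
beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
resid = y - Z @ beta_hat

# Step 3: wild bootstrap with Rademacher weights.
betas = np.empty((B, k + 1))
for b in range(B):
    eta = rng.choice([-1.0, 1.0], size=T)     # flip residual signs
    y_star = Z @ beta_hat + resid * eta
    betas[b] = np.linalg.lstsq(Z, y_star, rcond=None)[0]

lo, hi = np.percentile(betas, [2.5, 97.5], axis=0)
print("95% bootstrap intervals:", list(zip(lo.round(2), hi.round(2))))
```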

Relevance: 80.00%

Abstract:

This multi-perspectival Interpretive Phenomenological Analysis (IPA) study explored how people in the 'networks of concern' around four children with severe learning disabilities tried to make sense of the children's challenging behaviours. The study also aimed to explore what affected the relationships between people. It focused on the four children by interviewing their mothers, their teachers, and the CAMHS learning disability team members who were working with them; two fathers also joined part of the interviews. All interviews were conducted separately using a semi-structured approach. IPA allowed both a consideration of the participants' lived experiences and 'objects of concern' and a deconstruction of the multiple contexts of people's lives, with a particular focus on disability. The analysis rendered five themes: the importance of love and affection; the difficulties of living with a challenging child; the differences of living with a challenging child; the importance of being able to make sense of the challenges; and the value of good relationships between people. Findings were interpreted through the lens of CMM (Coordinated Management of Meaning), which facilitated a systemic deconstruction and reconstruction of the findings. The research found that making sense of the challenges was a key concern for parents. Shared meanings were important for people's relationships with each other, including through diagnostic and behavioural narratives. The importance of context is also highlighted, including a consideration of how societal views of disability influence people in the 'network of concern' around the child. A range of systemic approaches, methods, and techniques are suggested as one way of improving services to these children and their families. It is suggested that adopting a 'both/and' position is important in such work: both applying evidence-based approaches and being alert to, and exploring, the different ways people try to make sense of the children's challenges. Implications for practice include helping professionals be alert to their own constructions and professional narratives, slowing the pace with families, staying close to the concerns of families, and addressing network issues.

Relevance: 80.00%

Abstract:

There is increasing advocacy for inclusive, community-based approaches to environmental management, and growing evidence that involving communities improves the sustainability of social-ecological systems. Most community-based approaches rely on partnerships and knowledge exchange between communities, civil society organizations, and professionals such as practitioners and/or scientists. However, few models have actively integrated more horizontal knowledge exchange from community to community. We reflect on the transferability of community-owned solutions between indigenous communities by exploring the challenges and achievements of community peer-to-peer knowledge exchange as a way of empowering communities to face up to local environmental and social challenges. Using participatory visual methods, indigenous communities of the North Rupununi (Guyana) identified and documented their community-owned solutions through films and photostories. Indigenous researchers from this community then shared their solutions with six other communities that faced similar challenges within Guyana, Suriname, Venezuela, Colombia, French Guiana, and Brazil. They were supported by in-country civil society organizations and academics. We analyzed the impact of the knowledge exchange through interviews, field reports, and observations. Our results show that indigenous community members were significantly more receptive to solutions emerging from, and communicated by, other indigenous peoples, and that this approach was a significant motivating force for galvanizing communities to make changes in their own communities. We identified a range of enabling factors, such as building capacity for a shared conceptual and technical understanding, that strengthen the exchange between communities and contribute to a lasting impact. With national and international policy-makers mobilizing significant financial resources for biodiversity conservation and climate change mitigation, we argue that promoting community-owned solutions through community peer-to-peer exchange may deliver strategies that are more long-lasting, more socially and ecologically integrated, and more investment-effective than top-down, expert-led, and/or foreign-led initiatives.

Relevance: 80.00%

Abstract:

Hyperspectral sensors are being developed for remote sensing applications. These sensors produce huge data volumes that require fast processing and analysis tools. Vertex component analysis (VCA) has become a very useful tool for unmixing hyperspectral data. It has been successfully used to determine endmembers and unmix large hyperspectral data sets without any a priori knowledge of the constituent spectra. Compared with other geometric approaches, VCA is an efficient method from the computational point of view. In this paper we introduce new developments for VCA: 1) a new signal subspace identification method (HySime) is applied to infer the signal subspace in which the data set lives; this step also infers the number of endmembers present in the data set; 2) after the projection of the data set onto the signal subspace, the algorithm iteratively projects the data set onto directions orthogonal to the subspace spanned by the endmembers already determined, and each new endmember signature corresponds to the extreme of these projections. The capability of VCA to unmix large hyperspectral scenes (real or simulated) with low computational complexity is also illustrated.
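The projection loop described in point 2) is compact enough to sketch. The following is an illustrative simplification, not the published algorithm: the HySime step is replaced by a plain PCA projection, and the SNR-dependent projective projection of the full VCA is omitted.

```python
# Simplified VCA-style endmember extraction (PCA stands in for HySime).
import numpy as np

def vca(Y, p, seed=0):
    """Y: (bands x pixels) hyperspectral data; p: number of endmembers.
    Returns the indices of the pixels selected as endmembers."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    # Project onto a p-dimensional signal subspace (PCA stand-in for HySime).
    U, _, _ = np.linalg.svd(Y @ Y.T / n)
    X = U[:, :p].T @ Y                       # p x n projected data

    E = np.zeros((p, p))                     # endmember signatures found so far
    indices = []
    for i in range(p):
        # Random direction, made orthogonal to the span of current endmembers.
        w = rng.standard_normal(p)
        f = w - E @ np.linalg.pinv(E) @ w
        f /= np.linalg.norm(f)
        # The pixel with the extreme projection is the next endmember.
        v = f @ X
        idx = int(np.argmax(np.abs(v)))
        indices.append(idx)
        E[:, i] = X[:, idx]
    return indices

# Toy example: 3 random endmember spectra mixed into 1000 pixels.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                      # 50 bands, 3 endmembers
A = rng.dirichlet(np.ones(3), size=1000).T   # abundances on the simplex
print(vca(M @ A, p=3))
```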

Relevance: 80.00%

Abstract:

The present document deals with the optimization of the shape of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalising the lift coefficient. A set of control points defining the geometry is passed and parameterized as a B-Spline curve. These points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process uses 2D and 3D meshing algorithms that were coupled into our own code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Some standard NACA profiles were used to test the algorithm, by first obtaining their control points. The Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. To obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added. After a simulation is run, drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented to define the magnitude and direction of the displacement of each control point. The control points and the other parameters defined as design variables are iteratively modified to reach an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that obeys aerodynamic behaviour principles.
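The optimization step itself is simple to illustrate. The following is a minimal sketch, not the thesis code: the CFD solve is replaced by a hypothetical analytic drag() surrogate, and the gradient is approximated by finite differences over the design variables (an assumption; the document does not state how gradients were obtained), giving the pattern of one perturbed evaluation per control point.

```python
# Sketch of steepest descent on design variables with a finite-difference
# gradient. drag() is a hypothetical stand-in; in the real pipeline this
# call would mesh the B-Spline geometry and run the k-epsilon simulation.
import numpy as np

def drag(cp):
    """Hypothetical surrogate for the CFD-computed drag coefficient."""
    return np.sum((cp - 0.3) ** 2) + 0.01 * np.sum(np.abs(cp))

def steepest_descent(cp, step=0.05, h=1e-4, iters=50):
    cp = cp.astype(float).copy()
    for _ in range(iters):
        base = drag(cp)
        # Forward differences: one perturbed evaluation per design variable.
        grad = np.array([(drag(cp + h * e) - base) / h
                         for e in np.eye(cp.size)])
        cp -= step * grad        # displace each control point against the gradient
    return cp

control_points = np.array([0.0, 0.1, 0.5, 0.9, 1.0])  # toy 1-D design variables
print(steepest_descent(control_points))               # approaches ~0.295 componentwise
```

Each descent iteration costs one simulation per control point plus the baseline run, which is why a CFD-in-the-loop setup keeps the number of design variables modest.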

Relevance: 80.00%

Abstract:

Hyponatraemia, defined as a serum sodium concentration <135 mmol/l, is the most common disorder of body fluid and electrolyte balance encountered in clinical practice. It can lead to a wide spectrum of clinical symptoms, from subtle to severe or even life-threatening, and is associated with increased mortality, morbidity, and length of hospital stay in patients presenting with a range of conditions. Despite this, the management of patients remains problematic. The prevalence of hyponatraemia in widely different conditions, and the fact that hyponatraemia is managed by clinicians with a broad variety of backgrounds, have fostered diverse institution- and speciality-based approaches to diagnosis and treatment. To obtain a common and holistic view, the European Society of Intensive Care Medicine (ESICM), the European Society of Endocrinology (ESE) and the European Renal Association-European Dialysis and Transplant Association (ERA-EDTA), represented by European Renal Best Practice (ERBP), have developed the Clinical Practice Guideline on the diagnostic approach and treatment of hyponatraemia as a joint venture of three societies representing specialists with a natural interest in hyponatraemia. In addition to a rigorous approach to methodology and evaluation, we were keen to ensure that the document focused on patient-important outcomes and included utility for clinicians involved in everyday practice.

Relevance: 80.00%

Abstract:

In recent years, special attention has been devoted to food-induced allergies, among which hazelnut allergy stands out. Hazelnut is one of the most commonly consumed tree nuts and is widely used by the food industry in a wide variety of processed foods. It has been regarded as a food with potential health benefits, but also as a source of allergens capable of inducing mild to severe allergic reactions in sensitised individuals. Considering the great number of reports addressing hazelnut allergens, with an increasing trend, this review intends to assemble all the relevant information available so far on the main issues: the prevalence of tree nut allergy, clinical threshold levels, the molecular characterisation of hazelnut allergens (Cor a 1, Cor a 2, Cor a 8, Cor a 9, Cor a 10, Cor a 11, Cor a 12, Cor a 14 and Cor a TLP) and their clinical relevance, and methodologies for hazelnut allergen detection in foods. A comprehensive overview of the current data on the molecular characterisation of hazelnut allergens is presented, relating biochemical classification and biological function to clinical importance. Recent advances in hazelnut allergen detection methodologies are summarised and compared, including all the novel protein- and DNA-based approaches.

Relevance: 80.00%

Abstract:

The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve on the performance of individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images and edges are similarities between image pairs. Our first approach fuses multiple graphs with a mixture Markov model based on a random walk over the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images across multiple ranked lists obtained with different features. We then study a more well-defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are treated as related tasks in order to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to map the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast RCNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting appropriate convolutional features depending on the scale of the input object proposals. CRC effectively utilizes convolutional features and greatly reduces the number of negative proposals in a cascaded manner, while maintaining a high recall for true objects. Together, the two strategies improve detection accuracy and reduce computational cost.
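The first reranking idea, fusing per-feature similarity graphs and running a random walk, can be sketched compactly. The snippet below is an illustrative simplification, not the thesis implementation: the per-feature weights are assumed given (the work learns them under a naive Bayesian model), and the walk is a standard personalized random walk with restart at the query.

```python
# Sketch of graph fusion by a mixture of random walks for reranking.
# Feature weights are assumed given here; the described method learns them.
import numpy as np

def rerank(similarity_graphs, weights, query_idx, alpha=0.85, iters=100):
    """similarity_graphs: list of (n x n) nonnegative similarity matrices,
    one per feature. Returns image indices sorted by random-walk score."""
    n = similarity_graphs[0].shape[0]
    # Row-normalize each graph into a transition matrix, then mix them.
    P = sum(w * (S / S.sum(axis=1, keepdims=True))
            for w, S in zip(weights, similarity_graphs))
    # Personalized random walk: restart at the query with probability 1-alpha.
    restart = np.zeros(n)
    restart[query_idx] = 1.0
    score = np.full(n, 1.0 / n)
    for _ in range(iters):
        score = alpha * score @ P + (1 - alpha) * restart
    return np.argsort(-score)

# Toy example: 5 retrieved images, two features (e.g., local and color cues).
rng = np.random.default_rng(2)
S1, S2 = rng.random((5, 5)), rng.random((5, 5))
print(rerank([S1, S2], weights=[0.6, 0.4], query_idx=0))
```

Because the mixture of row-stochastic matrices stays row-stochastic when the weights sum to one, the fused walk remains a proper Markov chain, which is the property the mixture Markov model relies on.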