745 results for derogatory labels


Relevance: 10.00%

Abstract:

This work investigates the autonomous notions of European law that the Court of Justice has elaborated in the field of value added tax. More specifically, it proposes a conventional taxonomy of autonomous notions, useful for their systematization and better understanding, grounded in the fundamental distinction between autonomous notions and "own" notions (nozioni proprie). To this end, the work offers a broad analysis of the notions found in European case law, distinguished according to whether the ultimate aim of autonomy is to limit the Member States' discretion in applying derogatory provisions, in the case of autonomous notions in the strict sense, or to better define the scope of notions that are fundamental to the structure of the tax, in the case of "own" notions. The purpose of the work is to show that this functionalist mode of definition serves different objectives, united by the fact that they all assert the primacy of the European legal order. The proposed analysis and systematization can be a useful tool in the implementation, interpretation and application of these notions. In this sense, the work also examines examples drawn from national experience which show, on the one hand, that the Court's interventions concerning these notions influence national legal systems and, on the other, that misalignments still persist. Finally, the analysis of European case law and national regimes seeks to demonstrate that the elaboration of autonomous notions is part of a process towards ever deeper harmonization in the VAT field, driven by the Court of Justice. This process, to which the development of a "European conceptual fabric" contributes, entails a limitation of the Member States' discretion, and thus of the principle of conferral, in the name of the effectiveness and primacy of European law.

Relevance: 10.00%

Abstract:

After initial efforts in the late 1980s, interest in thermochemiluminescence (TCL) as an effective detection technique gradually faded because of drawbacks such as the high temperatures required to trigger light emission and the relatively low emission intensities, which resulted in poor sensitivity. Recent advances based on variably functionalized 1,2-dioxetanes as innovative luminophores have proved a promising route towards reagentless and ultrasensitive detection methods exploitable in biosensors, using TCL compounds as labels either as single molecules or embedded in modified nanoparticles. In this PhD thesis, a novel class of N-substituted acridine-containing 1,2-dioxetanes was designed, synthesized, and characterized as universal TCL probes endowed with optimal emission-triggering temperatures and higher detectability, which is particularly useful in bioanalytical assays. The different decorations introduced by inserting electron-donating (EDGs) and electron-withdrawing groups (EWGs) at the 2- and 7-positions of the acridine fluorophore were found to profoundly affect the photophysical properties and the activation parameters of the final 1,2-dioxetane products. Challenges in the synthesis of 1,2-dioxetanes were tackled by resorting to continuous-flow photochemistry, which afforded the target parent compound in high yield, with short reaction times and easy scalability. Computational studies were also carried out to predict the reactivity of the olefins in the crucial photooxygenation reaction as well as the stability of the final products. A preliminary application of the prototype TCL molecule was carried out in HaCaT cell lines, showing that these molecules can be detected in real biological samples and cell-based assays. Finally, attempts at characterizing 1,2-dioxetanes in different environments (solid state, optical glue and nanosystems) and at developing bioconjugated TCL probes are also presented and discussed.

Relevance: 10.00%

Abstract:

Depth represents a crucial piece of information in many practical applications, such as obstacle avoidance and environment mapping. This information can be provided either by active sensors, such as LiDARs, or by passive devices like cameras. A popular passive device is the binocular rig, which makes it possible to triangulate the depth of the scene through two synchronized and aligned cameras. However, many devices already available in existing infrastructures are monocular passive sensors, such as most surveillance cameras. The intrinsic ambiguity of the problem makes monocular depth estimation a challenging task. Nevertheless, recent progress in deep learning is paving the way towards a new class of algorithms able to handle this complexity. This work addresses many relevant topics related to the monocular depth estimation problem. It presents networks capable of predicting accurate depth values even on embedded devices and without the need for expensive ground-truth labels at training time. Moreover, it introduces strategies to estimate the uncertainty of these models, and it shows that monocular networks can easily generate training labels for different tasks at scale. Finally, it evaluates off-the-shelf monocular depth predictors for the relevant use case of social distance monitoring, and shows how this technology can overcome the limitations of existing strategies.
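
As one concrete illustration of uncertainty estimation for monocular depth networks, the Python sketch below treats the disagreement of an ensemble of independently trained models as a per-pixel uncertainty map. This is a generic scheme from the literature, not necessarily the strategy adopted in the thesis, and the toy convolutional networks in the usage example are placeholders for real monocular architectures.

# Minimal sketch (not the thesis code): per-pixel uncertainty for a monocular
# depth network estimated as the variance of an ensemble of models.
import torch

def ensemble_depth_with_uncertainty(models, image):
    """Run every model on the same image and return mean depth and variance.

    models: list of depth networks mapping a (B,3,H,W) image to (B,1,H,W) depth
    image:  tensor of shape (B,3,H,W)
    """
    with torch.no_grad():
        preds = torch.stack([m(image) for m in models], dim=0)  # (N,B,1,H,W)
    depth = preds.mean(dim=0)          # ensemble depth estimate
    uncertainty = preds.var(dim=0)     # per-pixel disagreement as uncertainty
    return depth, uncertainty

# Usage with toy stand-in networks (real monocular models would be used instead).
if __name__ == "__main__":
    toy_models = [torch.nn.Sequential(torch.nn.Conv2d(3, 1, 3, padding=1)).eval()
                  for _ in range(3)]
    img = torch.rand(1, 3, 192, 640)
    d, u = ensemble_depth_with_uncertainty(toy_models, img)
    print(d.shape, u.shape)  # torch.Size([1, 1, 192, 640]) twice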

Relevance: 10.00%

Abstract:

By investigating the inner workings of leading financial institutions, and their dense interconnections, this thesis explores the evolution of traditional financial instruments like bonds to tackle sustainability issues. Building on fieldwork among green financiers, the thesis is based upon participant observation of working groups appointed to define standards for sustainable bonds. Engaging critical theory, one claim is that investors are increasingly recruited, or interpellated, by an emerging global green ideological apparatus aimed at ensuring the reproduction of existing social relations. Taking stock of the proliferation of both public and private actors in the definition of green standards and practices, the thesis proposes that this green ideology is becoming hegemonic. Focusing on the case of green bond pricing, it suggests that environmental and climate labels, and other green signifiers for financial products, take on brand-like qualities. Crystallizing imaginaries, meanings, and forms of personhood, they play a fundamental role in what is defined as a dual process of valuation-cum-subjectivation. Identifying themselves as "green", financiers value green and brown assets differently, allowing a "green" financial value slowly to come to matter. Yet, alongside their ideological role, green labels have come to be almost exclusively standardized with reference to specific Climate Scenarios (e.g. Net Zero). These scenarios chart the optimal path towards achieving a carbon-neutral world and represent the quintessential example of socioeconomic planning, crucially undermining neoliberal ideas of "the market" as the ultimate calculative device.

Relevance: 10.00%

Abstract:

In order to estimate depth through supervised deep learning-based stereo methods, it is necessary to have access to precise ground-truth depth data. While gathering precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances with high precision on difficult surfaces (which exhibit non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground-truth data without having to rely on external sensors. In this thesis, two different approaches have been tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
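
To make the proxy-labeling idea concrete, the sketch below generates sparse disparity labels from a classical, non-learned stereo matcher and keeps only the pixels that pass a left-right consistency check. The choice of OpenCV's SGBM, the flip trick for the right-view disparity, and the threshold are assumptions for illustration; the thesis does not specify these, and its actual algorithm and confidence measures may differ.

# Illustrative sketch (assumed pipeline, not the thesis code): produce sparse
# "proxy" disparity labels from a classical stereo algorithm and keep only
# pixels passing a left-right consistency check.
import cv2
import numpy as np

def proxy_disparity_labels(left_gray, right_gray, max_diff=1.0):
    """left_gray, right_gray: rectified 8-bit grayscale images."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    # SGBM returns fixed-point disparities scaled by 16
    disp_l = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Right-view disparity obtained by matching the horizontally flipped pair
    disp_r = sgbm.compute(np.ascontiguousarray(right_gray[:, ::-1]),
                          np.ascontiguousarray(left_gray[:, ::-1])).astype(np.float32) / 16.0
    disp_r = disp_r[:, ::-1]
    h, w = disp_l.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Reproject each left pixel into the right image and compare disparities
    x_r = np.clip((xs - disp_l).round().astype(int), 0, w - 1)
    lr_diff = np.abs(disp_l - disp_r[np.arange(h)[:, None], x_r])
    valid = (disp_l > 0) & (lr_diff < max_diff)   # hand-crafted confidence filter
    labels = np.where(valid, disp_l, -1.0)        # -1 marks unlabelled pixels
    return labels, valid

if __name__ == "__main__":
    left = np.random.randint(0, 255, (120, 160), dtype=np.uint8)   # fake pair, demo only
    right = np.roll(left, -4, axis=1)
    labels, valid = proxy_disparity_labels(left, right)
    print(f"labelled pixels: {valid.mean():.1%}")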

Relevance: 10.00%

Abstract:

Unmanned Aerial Vehicles (UAVs) equipped with cameras have rapidly been deployed in a wide range of applications, such as smart cities, agriculture, and search and rescue. Even though UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment able to produce typical UAV data: RGB images from the UAV's camera, plus the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in other related fields, and results demonstrate it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we are capable of extending the already existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we obtained a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
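
To illustrate what such a serializable scene specification could look like, the sketch below defines a small JSON schema parsed into Python dataclasses. The field names (environment_preset, mesh, position, orientation, color, scale) and the JSON layout are assumptions for illustration, not the framework's actual format.

# Hedged sketch of a possible serializable scene specification; the schema is
# an assumption, not the thesis' actual input format.
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneObject:
    mesh: str                     # identifier of the asset to place
    position: List[float]         # x, y, z in the environment frame
    orientation: List[float]      # roll, pitch, yaw in degrees
    color: str = "default"
    scale: float = 1.0

@dataclass
class SceneSpec:
    environment_preset: str       # which pre-built environment to load
    objects: List[SceneObject] = field(default_factory=list)

def load_scene(json_text: str) -> SceneSpec:
    raw = json.loads(json_text)
    objs = [SceneObject(**o) for o in raw.get("objects", [])]
    return SceneSpec(environment_preset=raw["environment_preset"], objects=objs)

example = """
{
  "environment_preset": "urban_small",
  "objects": [
    {"mesh": "car_01", "position": [4.0, 1.5, 0.0],
     "orientation": [0.0, 0.0, 90.0], "color": "red", "scale": 1.0}
  ]
}
"""
print(load_scene(example))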

Relevance: 10.00%

Abstract:

The classification of 3D geometric data such as point clouds is an emerging topic in computer vision, with applications in many autonomous driving, robotics, and augmented reality scenarios. Although a large number of sensors capable of acquiring real scans are available on the market, their annotation constitutes a bottleneck for dataset generation. To mitigate this problem, domain adaptation is often used, leveraging annotated synthetic data. This work aims to analyze and implement domain adaptation methods for point cloud classification based on pseudo-labels. In particular, experiments were carried out within the RefRec framework, evaluating the possibility of replacing the pre-existing model with new deep learning architectures. Among these, a Transformer with input masking achieved results above the state of the art on the synthetic-to-real adaptation (ModelNet->ScanNet) examined in this thesis.
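
For context, the sketch below shows a generic pseudo-label selection step for unsupervised domain adaptation: a classifier trained on labelled synthetic clouds labels the unlabelled real clouds, and only predictions above a confidence threshold are kept for self-training. This is a common baseline scheme, not the RefRec implementation; the threshold and the toy classifier are placeholders.

# Minimal, generic pseudo-labeling sketch (not RefRec): keep only confidently
# predicted target samples for self-training.
import torch
import torch.nn.functional as F

def select_pseudo_labels(model, target_batches, threshold=0.9):
    """Return (inputs, pseudo_labels) for confidently predicted target samples."""
    model.eval()
    kept_x, kept_y = [], []
    with torch.no_grad():
        for x in target_batches:                 # x: (B, N_points, 3) point clouds
            probs = F.softmax(model(x), dim=-1)  # (B, num_classes)
            conf, pred = probs.max(dim=-1)
            mask = conf >= threshold             # keep only confident predictions
            if mask.any():
                kept_x.append(x[mask])
                kept_y.append(pred[mask])
    if not kept_x:
        return None, None
    return torch.cat(kept_x), torch.cat(kept_y)

if __name__ == "__main__":
    class ToyCls(torch.nn.Module):               # stand-in for a point cloud classifier
        def __init__(self, classes=10):
            super().__init__()
            self.fc = torch.nn.Linear(3, classes)
        def forward(self, x):                    # mean-pool points, then classify
            return self.fc(x.mean(dim=1))
    batches = [torch.rand(8, 1024, 3) for _ in range(2)]
    xs, ys = select_pseudo_labels(ToyCls(), batches, threshold=0.15)
    print(None if xs is None else (xs.shape, ys.shape))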

Relevance: 10.00%

Abstract:

The importance of product presentation in the marketing industry is well known. Labels are crucial for providing information to the buyer, but at a modest additional expense, a beautiful label with exquisite embellishments may also give the goods a sensation of high quality and elegance. Enhancing the capabilities of stamping machines is required to keep up with the increasing speed of production lines in the modern manufacturing industry and to offer new opportunities for customization. It is in this context of improvements and refinements that this work takes place. The thesis was developed during an internship at Studio D, the firm that designs the mechanics of the machines produced by Cartes. The aim of this work is to study possible upgrades for the existing hot stamping machines. The main focus is centred on two objectives: first, evaluating the pressing forces generated by these machines and characterising how the mat used in the stamping process reacts to such forces; second, proposing a new configuration for the press mechanism in order to improve the rigidity and performance of the machines. The first objective is reached through a combined approach: the mat is roughly characterized from experimental data, while the frame of the machine is studied through FEM analysis. The results are combined and used to upgrade a worksheet that estimates the forces exerted by the machines. The second objective is reached with the proposal of new, improved designs for the main components of the machines.
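
One simple way to combine a measured mat force-deflection curve with a frame stiffness obtained from FEM is to treat mat and frame as springs in series and solve for the force at which their combined deflection equals the press travel. The sketch below illustrates this idea only; the model, the mat curve and the stiffness value are hypothetical placeholders, not the data or worksheet developed in the thesis.

# Illustrative sketch (assumed model, hypothetical numbers): estimate the pressing
# force from a mat force-deflection curve and the frame compliance (springs in series).
import numpy as np

# Hypothetical mat characterization: force [N] vs. compression [mm]
mat_compression = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
mat_force       = np.array([0.0, 2000., 5500., 11000., 19000., 30000.])

K_FRAME = 120000.0  # frame stiffness from FEM [N/mm], placeholder value

def press_force(total_travel_mm, tol=1e-6):
    """Force at which mat compression plus frame deflection equals the press travel."""
    lo, hi = 0.0, mat_compression[-1]
    while hi - lo > tol:
        x_mat = 0.5 * (lo + hi)
        force = np.interp(x_mat, mat_compression, mat_force)
        if x_mat + force / K_FRAME < total_travel_mm:
            lo = x_mat        # not enough travel used: compress the mat further
        else:
            hi = x_mat
    return np.interp(lo, mat_compression, mat_force)

print(f"Estimated pressing force: {press_force(0.45):.0f} N")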

Relevance: 10.00%

Abstract:

Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images that had been evaluated for Alzheimer's disease: 540 evaluations were positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensionality of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via LOOCV, where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method based on PCA (MSSIM = 0.79 ± 0.06), which nevertheless produced reconstructed images of clearly poorer quality than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis is to improve the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved by refining the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters, and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
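
The sketch below illustrates the two building blocks mentioned above on toy data: Isomap dimensionality reduction of flattened images (scikit-learn) and SSIM-based comparison between an original image and a reconstruction (scikit-image). The "inverse map" here is a naive nearest-neighbour stand-in; the thesis' scale-free interpolation is not reproduced, and the random images are placeholders for real PET data.

# Minimal sketch (not the thesis pipeline): Isomap reduction plus SSIM assessment.
import numpy as np
from sklearn.manifold import Isomap
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))            # placeholder "PET" images
X = images.reshape(len(images), -1)           # flatten to (n_samples, n_features)

embedding = Isomap(n_neighbors=10, n_components=5).fit_transform(X)

def reconstruct(idx):
    """Naive 'inverse map': return the training image closest in embedding space."""
    d = np.linalg.norm(embedding - embedding[idx], axis=1)
    d[idx] = np.inf                           # exclude the image itself (LOO spirit)
    return images[np.argmin(d)]

score = ssim(images[0], reconstruct(0), data_range=1.0)
print(f"SSIM between image 0 and its reconstruction: {score:.3f}")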

Relevance: 10.00%

Abstract:

The ability to create hybrid systems that blend different paradigms has become a requirement for complex AI systems, which are usually made of more than one component. In this way, it is possible to exploit the advantages of each paradigm and the potential of different approaches, such as symbolic and non-symbolic ones. In particular, non-symbolic approaches are often exploited for their efficiency, effectiveness and ability to manage large amounts of data, while symbolic approaches are exploited to ensure explainability, fairness, and trustworthiness in general. The thesis lies in this context, in particular in the design and development of symbolic technologies that can be easily integrated and made interoperable with other AI technologies. 2P-Kt is a symbolic ecosystem developed for this purpose: it provides a logic programming (LP) engine which can be easily extended and customized to deal with specific needs. The aim of this thesis is to extend 2P-Kt to support constraint logic programming (CLP) as one of the main paradigms for solving highly combinatorial problems given a declarative problem description and a general constraint-propagation engine. A real case study concerning school timetabling is described to show a practical usage of the implemented CLP(FD) library. Since CLP represents only one particular way of extending LP to domain-specific scenarios, this thesis also presents a more general framework, Labelled Prolog, which extends LP with labelled terms and, in particular, labelled variables. The framework shows how all variations and extensions of LP can be framed within a single language, reducing the huge number of existing languages and libraries and focusing instead on how to manage different domain needs through labels, which can be associated with every kind of term. The mapping of CLP onto Labelled Prolog is also discussed, as well as the benefits of the proposed approach.
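
As a rough illustration of the kind of finite-domain problem targeted by CLP(FD), the toy Python sketch below assigns time slots to lessons so that lessons sharing a teacher or a class never overlap. It is not 2P-Kt or Prolog code: a tiny backtracking search with simple constraint checks stands in for a real constraint-propagation engine, and the lessons, teachers and slots are invented examples.

# Toy timetabling constraint problem (illustrative only, not CLP(FD)/2P-Kt code).
SLOTS = [1, 2, 3]  # hypothetical available time slots
LESSONS = {        # lesson -> (teacher, class)
    "math_A":    ("alice", "A"),
    "physics_A": ("bob",   "A"),
    "math_B":    ("alice", "B"),
}

def conflicts(l1, l2):
    t1, c1 = LESSONS[l1]
    t2, c2 = LESSONS[l2]
    return t1 == t2 or c1 == c2      # same teacher or same class => must differ

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(LESSONS):
        return assignment
    lesson = next(l for l in LESSONS if l not in assignment)
    for slot in SLOTS:
        # constraint check in miniature: reject slots violating a constraint
        if all(not (conflicts(lesson, other) and slot == s)
               for other, s in assignment.items()):
            result = solve({**assignment, lesson: slot})
            if result:
                return result
    return None

print(solve())   # e.g. {'math_A': 1, 'physics_A': 2, 'math_B': 2}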