23 results for Axiomatic Models of Resource Allocation


Relevance: 100.00%
Publisher:
Abstract:

In this work I explored many aspects of cognitive visual science, each grounded in a different academic field, proposing mathematical models capable of reproducing both the neuro-physiological and the phenomenological results described in the recent literature. The thesis is composed of three main chapters, corresponding to the three main areas of research on which I focused my work. The results of each study lay the basis for the next, and together they form a homogeneous, large-scale survey of the spatio-temporal properties of the architecture of the mammalian visual cortex.

Relevance: 100.00%
Publisher:
Abstract:

We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We adopted a block modeling (BM) approach to evaluate which fault system is actively accommodating the deformation occurring in each area. For the Umbria-Marche Apennines, we found that the tectonic extension observed by GPS is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We also estimated the interseismic coupling distribution of the ATF using a 3D fault surface; the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analyzed is the Gargano promontory, for which we used the available InSAR and GPS velocities jointly. We first aligned the two datasets to the same terrestrial reference frame and then, using a simple dislocation approach, estimated the fault parameters that best reproduce the data, obtaining a solution corresponding to the Mattinata fault. We subsequently included both the GPS and InSAR datasets in a BM analysis to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North Adriatic and South Adriatic plates. We find that the deformation in that region should be accommodated by more than one fault system, which is, however, difficult to detect given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we also estimated the interseismic coupling distribution of the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions obtained with the BM approach were tested by means of checkerboard resolution tests, which demonstrate that the recovered coupling patterns depend on the positions of the geodetic data.
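In schematic form, block modeling predicts the interseismic velocity at a surface point as the rigid rotation of its block minus an elastic back-slip contribution from the coupled portions of the bounding faults. With generic notation (not necessarily the thesis's own), this reads:

```latex
\mathbf{v}(\mathbf{x}) \;=\; \boldsymbol{\Omega}_b \times \mathbf{x}
\;-\; \sum_{k} \phi_k \,\mathbf{G}(\mathbf{x},\boldsymbol{\xi}_k)\,\dot{\mathbf{s}}_k
```

where Omega_b is the Euler vector of the block containing x, phi_k in [0,1] is the interseismic coupling on fault patch k at position xi_k, G is the elastic Green's function, and s_dot_k is the long-term slip rate. Estimating the phi_k from InSAR and GPS velocities is the inverse problem whose resolution the checkerboard tests mentioned above assess.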

Relevance: 100.00%
Publisher:
Abstract:

This dissertation studies the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. To this end, we propose a mechanism under a new market structure which we call semi-centralization. In Chapter 1, we give a brief summary of matching theory, presenting the earliest examples in matching history together with the most influential papers and mechanisms. In Chapter 2, we propose our mechanism. In its real-life application, namely Turkish university placements, the mechanism reduces the inefficiencies of the current system. Its success depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In Chapter 3, we refine our basic mechanism. The modification has a crucial effect on the results. The new mechanism is, as we call it, a middle mechanism: on one subdomain it coincides with the original basic mechanism, while on the other partition it gives the same results as Gale and Shapley's algorithm. In Chapter 4, we apply our basic mechanism to the well-known roommate problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert it into a semi-centralized two-sided game, because our basic mechanism is designed for that framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily tells us, using purified orderings, whether a profile lacks stability. Finally, we show a method to find all stable matchings when several exist: simply run the mechanism for each of the top agents in the social preference.
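For reference, the Gale and Shapley deferred-acceptance algorithm, which the refined "middle" mechanism coincides with on one part of the preference domain, can be sketched as follows. The student and university preference lists below are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of Gale-Shapley deferred acceptance (proposer-optimal).
# Preference profiles here are illustrative, not from the dissertation.

def deferred_acceptance(proposer_prefs, responder_prefs):
    """Return a proposer-optimal stable matching. prefs: agent -> ranked list."""
    # rank[r][p] = position of proposer p in responder r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(lst)}
            for r, lst in responder_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next index each p proposes to
    engaged = {}                                   # responder -> current proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]      # p's best not-yet-tried option
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # r tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                         # r rejects p
    return {p: r for r, p in engaged.items()}

students = {"s1": ["u1", "u2"], "s2": ["u1", "u2"]}
universities = {"u1": ["s2", "s1"], "u2": ["s1", "s2"]}
match = deferred_acceptance(students, universities)  # {"s1": "u2", "s2": "u1"}
```

In this instance both students prefer u1, but u1 prefers s2, so s1 is rejected once and settles for u2; no student-university pair would jointly deviate, which is exactly the stability notion the abstract refers to.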

Relevance: 100.00%
Publisher:
Abstract:

In silico methods, such as musculoskeletal modelling, may aid the selection of the optimal surgical treatment for highly complex pathologies such as scoliosis. Many musculoskeletal models use a generic, simplified representation of the intervertebral joints, which are fundamental to the flexibility of the spine. A suitable representation of the intervertebral joint is therefore crucial for modelling and simulating the spine. The aim of this PhD was to characterise specimen-specific models of the intervertebral joint for multi-body models from experimental datasets. First, the project investigated the characterisation of a specimen-specific lumped-parameter model of the intervertebral joint from an experimental dataset of a four-vertebra lumbar spine segment. Specimen-specific stiffnesses were determined with an optimisation method, and the sensitivity of the parameters to the joint pose was investigated. Results showed that both the stiffnesses and the predicted motions depended strongly on the joint pose. The method was then reapplied to another dataset comprising six complete lumbar spine segments under three different loading conditions. Specimen-specific stiffnesses, both uniform across joint levels and level-dependent, were calculated by optimisation. The specimen-specific stiffnesses showed high inter-specimen variability and were also specific to the loading condition. Level-dependent stiffnesses are necessary for accurate kinematic predictions and should be determined independently of one another. Secondly, a framework to create subject-specific musculoskeletal models of individuals with severe scoliosis was developed, resulting in a robust codified pipeline for creating subject-specific, severely scoliotic spine models from CT data. In conclusion, this thesis showed that specimen-specific intervertebral joint stiffnesses are highly sensitive to the joint pose definition and that level-dependent optimisation is important. Further, an open-source codified pipeline to create patient-specific scoliotic spine models from CT data was released. These studies and this pipeline can facilitate the specimen-specific characterisation of the scoliotic intervertebral joint and its incorporation into scoliotic musculoskeletal spine models.
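The optimisation idea behind the lumped-parameter characterisation can be illustrated on a hypothetical single-degree-of-freedom joint: choose the stiffness k that minimises the squared error between measured moments and the linear prediction k*theta. The angles and moments below are synthetic, not the thesis's experimental data, and the real problem is multi-axial and pose-dependent.

```python
# Hypothetical 1-DoF illustration of stiffness identification by least
# squares: k minimises sum_i (M_i - k * theta_i)^2.
# Synthetic data (rad, Nm) -- not the thesis's experimental dataset.
angles  = [0.01, 0.02, 0.03, 0.04]     # intervertebral rotations
moments = [0.52, 1.01, 1.49, 2.02]     # applied flexion moments

# Closed-form least-squares solution for a line through the origin:
# k = sum(theta_i * M_i) / sum(theta_i^2)
k = sum(t * m for t, m in zip(angles, moments)) / sum(t * t for t in angles)
predicted = [k * t for t in angles]    # moments reproduced by the fitted joint
```

In the thesis the analogous fit is done per rotation axis and per joint level, which is why the level-dependent stiffnesses mentioned above can differ from a single uniform value.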

Relevance: 100.00%
Publisher:
Abstract:

Imaging technologies are widely used in fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving such ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but lack a systematic way of incorporating knowledge directly from data. The more recent learned approaches, on the other hand, can easily learn the intricate statistics of images from a large dataset, but have no systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including both linear and strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered with the filtered back-projection method.
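The regularized Gauss-Newton update that the unrolled networks are built around can be sketched on a toy one-parameter problem. The forward model below (y = exp(a*t)) stands in for the EIT operator purely for illustration; the thesis's learned regularizer is replaced here by a plain Tikhonov damping term.

```python
# Schematic regularized Gauss-Newton iteration on a toy nonlinear model
# y = exp(a * t). This is NOT the EIT forward operator -- just the update
# rule (J^T J + lam)^-1 J^T r that unrolled schemes iterate.
import math

t_obs = [0.0, 1.0, 2.0, 3.0]
a_true = 0.3
y_obs = [math.exp(a_true * t) for t in t_obs]    # noiseless synthetic data

a, lam = 0.0, 1e-6                               # initial guess, damping weight
for _ in range(20):
    r = [y - math.exp(a * t) for t, y in zip(t_obs, y_obs)]  # residuals
    J = [t * math.exp(a * t) for t in t_obs]                 # Jacobian d(model)/da
    # Scalar Gauss-Newton step with Tikhonov damping:
    a += sum(j * e for j, e in zip(J, r)) / (sum(j * j for j in J) + lam)
```

Unrolling, as described in the abstract, fixes a small number of such iterations and replaces the hand-chosen damping/regularization with a network trained on data.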

Relevance: 100.00%
Publisher:
Abstract:

Both compressible and incompressible porous medium models are used in the literature to describe the mechanical aspects of living tissues. Using a stiff pressure law, it is possible to build a link between these two representations: in the incompressible limit, compressible models generate free boundary problems in which saturation holds on the moving domain. Our work investigates the stiff pressure limit of reaction-advection porous medium equations motivated by tumor development. Our first study concerns the analysis and numerical simulation of a model including the effect of nutrients. A coupled system of equations describes the cell density and the nutrient concentration, and the derivation of the pressure equation in the stiff limit was an open problem for which the strong compactness of the pressure gradient is needed. To establish it, we use two new ideas: an L^3 version of the celebrated Aronson-Bénilan estimate, and a sharp uniform L^4 bound on the pressure gradient. We further investigate the sharpness of this bound through a finite difference upwind scheme, which we prove to be stable and asymptotic preserving. Our second study centers on porous medium equations including convective effects. We extend the techniques developed for the nutrient case, thus finding the complementarity relation on the limit pressure; moreover, we provide an estimate of the convergence rate at the incompressible limit. Finally, we study a multi-species system. In particular, we account for phenotypic heterogeneity by including a structured variable in the problem; in this case, a cross-(degenerate)-diffusion system describes the evolution of the phenotypic distributions. Adapting methods recently developed in the context of two-species systems, we prove existence of weak solutions and pass to the incompressible limit. Furthermore, we prove new regularity results on the total pressure, which is related to the total density by a power law of state.
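In generic notation (the thesis's exact model also includes nutrients and advection, which are omitted here), the stiff pressure law and the incompressible limit it induces can be sketched as:

```latex
p_\gamma = \frac{\gamma}{\gamma-1}\,\rho^{\gamma-1}, \qquad
\partial_t \rho - \nabla\cdot\bigl(\rho\,\nabla p_\gamma\bigr) = \rho\,G(p_\gamma),
\\[4pt]
\gamma \to \infty:\quad
p_\infty\bigl(\Delta p_\infty + G(p_\infty)\bigr) = 0, \qquad
0 \le \rho_\infty \le 1, \qquad p_\infty\,(1-\rho_\infty) = 0 .
```

The last two relations express saturation on the moving domain: wherever the limit pressure is positive, the density equals one, and the pressure there satisfies the complementarity (free boundary) relation mentioned in the abstract.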

Relevance: 100.00%
Publisher:
Abstract:

The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy and data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of 'Internet of Things' and 'resource-constrained devices', coupled with a threat landscape in which each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety, and privacy, justified by the 'IoT revolution' through the so-called infraethical perspective. Chapter 3 investigates whether, and to what extent, the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on 'privacy', understood by proxy as including EU data protection. In particular, Chapter 4 addresses three legal challenges that ubiquitous IoT data and metadata processing poses to the EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the model of the CNIL (the French data protection authority).

Relevance: 100.00%
Publisher:
Abstract:

This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CSP concerns the assignment of times and resources to a set of activities that are to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, in which the allocation problem involves unary resources. Instances are described through the Synchronous Data-Flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for computing a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a constraint programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion; conversely, our technique is based on a non-linear model and tackles the problem as a whole, with the period value inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
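The modular precedence relation at the heart of the CRCSP model can be illustrated on a tiny invented instance (the activity names, durations, and start times below are not from the thesis). If activity j consumes what activity i produced delta iterations earlier, the start times within one period must satisfy s_j - s_i >= d_i - delta * period.

```python
# Minimal illustration of the modular precedence relation in cyclic
# scheduling. Instance data is invented, not taken from the thesis.

def respects_modular_precedences(starts, durations, precedences, period):
    """precedences: list of (i, j, delta) where j in iteration n depends on
    i from iteration n - delta; check s_j - s_i >= d_i - delta * period."""
    return all(starts[j] - starts[i] >= durations[i] - delta * period
               for i, j, delta in precedences)

durations = {"A": 2, "B": 3, "C": 1}
# B follows A within the same iteration; A follows C from the previous one.
precs = [("A", "B", 0), ("C", "A", 1)]
starts = {"A": 0, "B": 2, "C": 3}
```

With period 5 this schedule is feasible, but shrinking the period to 3 violates the cross-iteration constraint from C to A, which is why the period value cannot be chosen independently of the scheduling decisions, as the abstract points out.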