836 results for Overhead conductors
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high-density I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
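As a rough illustration of the pattern-generation and response-compaction blocks mentioned above, the sketch below implements a toy LFSR and MISR in software; the widths and feedback taps are arbitrary example values, not the DfT hardware proposed in the dissertation.

```python
# Hedged sketch: a Fibonacci LFSR producing pseudo-random test bits and a MISR
# compacting interconnect responses into a signature. Polynomial taps and
# register widths are invented example values.

def lfsr_stream(seed, taps, width, length):
    """Yield `length` pseudo-random bits from a Fibonacci LFSR."""
    state = seed
    for _ in range(length):
        yield state & 1
        fb = 0
        for t in taps:                         # XOR of the tap positions
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))

def misr_update(signature, responses, taps, width):
    """One MISR cycle: shift, feed back, and XOR in the parallel response bits."""
    fb = 0
    for t in taps:
        fb ^= (signature >> t) & 1
    signature = (signature >> 1) | (fb << (width - 1))
    for i, r in enumerate(responses):          # inject one response bit per stage
        signature ^= (r & 1) << (i % width)
    return signature

if __name__ == "__main__":
    patterns = list(lfsr_stream(seed=0b1011, taps=[0, 1], width=4, length=8))
    sig = 0
    for p in patterns:
        # pretend four interconnects echo the stimulus (fault-free behaviour)
        sig = misr_update(sig, [p, p, p, p], taps=[0, 3], width=8)
    print(patterns, hex(sig))
```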
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of pins available at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second solution minimizes the number of output test pins. In addition, two subgroup configuration methods are further proposed to generate subgroups inside each test group.
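The dissertation groups tiles according to how they are interconnected; the sketch below illustrates only the chip-level pin-budget constraint, using a simple first-fit packing heuristic with invented tile names and pin counts.

```python
# Hedged sketch of the grouping step: tiles are packed into test groups so that
# each group's total test-pin requirement stays within the pins available at
# the chip level. The thesis' actual grouping and pin-minimization are more
# involved; this is only a first-fit illustration.

def group_tiles(pin_needs, pins_available):
    """pin_needs: {tile: test pins required}; returns a list of test groups."""
    groups, loads = [], []
    for tile, need in sorted(pin_needs.items(), key=lambda kv: -kv[1]):
        if need > pins_available:
            raise ValueError(f"{tile} alone exceeds the chip-level pin budget")
        for g, load in enumerate(loads):
            if load + need <= pins_available:      # first group with room
                groups[g].append(tile)
                loads[g] += need
                break
        else:                                      # open a new test group
            groups.append([tile])
            loads.append(need)
    return groups

if __name__ == "__main__":
    needs = {"cpu": 10, "gpu": 8, "dsp": 6, "mem_ctrl": 5, "io": 3}
    print(group_tiles(needs, pins_available=16))
```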
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
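The constraint that neighbouring blocks sharing power rails must not receive the same stagger value resembles a graph-colouring problem; the sketch below shows a greedy assignment over the block adjacency implied by shared boundary lengths. It is an illustrative approximation, not the mathematical model or the heuristic evaluated in the dissertation.

```python
# Greedy stagger assignment: blocks that share a power-rail boundary never get
# the same shift-clock stagger value. Block names and boundary lengths are
# invented example data.

def assign_stagger(blocks, shared_boundary, n_staggers):
    """shared_boundary: {(a, b): length} for block pairs that share power rails."""
    neighbours = {b: set() for b in blocks}
    for (a, b), length in shared_boundary.items():
        if length > 0:
            neighbours[a].add(b)
            neighbours[b].add(a)
    stagger = {}
    # handle the most constrained blocks (longest total shared boundary) first
    order = sorted(blocks, key=lambda b: -sum(
        l for pair, l in shared_boundary.items() if b in pair))
    for b in order:
        used = {stagger[n] for n in neighbours[b] if n in stagger}
        free = [s for s in range(n_staggers) if s not in used]
        if not free:
            raise ValueError("not enough stagger values for this adjacency")
        stagger[b] = free[0]
    return stagger

if __name__ == "__main__":
    blocks = ["A", "B", "C", "D"]
    shared = {("A", "B"): 120, ("B", "C"): 80, ("A", "C"): 40, ("C", "D"): 60}
    print(assign_stagger(blocks, shared, n_staggers=4))
```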
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods that make testing effective and feasible from a cost perspective.
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust
using a logic-based declarative language. Logical trust systems authorize each
request by constructing a proof from a context---a set of authenticated logic
statements representing credentials and policies issued by various principals
in a networked system. A key barrier to practical use of logical trust systems
is the problem of managing proof contexts: identifying, validating, and
assembling the credentials and policies that are relevant to each trust
decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data
repository for storing the credentials and policies; (ii) introducing a
programmable credential discovery and assembly layer that generates the
appropriate tailored context for a given request. The authenticated data
repository is built upon a scalable key-value store with its contents named by
secure identifiers and certified by the issuing principal. The SAFE language
provides scripting primitives to generate and organize logic sets representing
credentials and policies, materialize the logic sets as certificates, and link
them to reflect delegation patterns in the application. The authorizer fetches
the logic sets on demand, then validates and caches them locally for further
use. Upon each request, the authorizer constructs the tailored proof context
and provides it to the SAFE inference for certified validation.
Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling the security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.
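As a toy illustration of two ideas above — logic sets stored under self-certifying identifiers and an authorizer that fetches, validates, and caches them on demand — consider the following sketch. The class names, hash-based naming scheme, and credential syntax are invented for illustration and are not SAFE's actual language or wire format.

```python
# Illustrative only: a key-value repository whose contents are named by secure
# identifiers, plus an authorizer that fetches, verifies, and caches logic sets.
import hashlib

class Repository:
    """Stand-in for the distributed authenticated key-value store."""
    def __init__(self):
        self._store = {}

    def put(self, issuer, logic_set):
        token = hashlib.sha256(f"{issuer}:{logic_set}".encode()).hexdigest()
        self._store[token] = (issuer, logic_set)   # real SAFE also signs the set
        return token                                # secure identifier / link

    def get(self, token):
        return self._store[token]

class Authorizer:
    def __init__(self, repo):
        self.repo, self.cache = repo, {}

    def fetch(self, token):
        """Fetch a logic set by identifier, check its name, and cache it locally."""
        if token not in self.cache:
            issuer, logic_set = self.repo.get(token)
            expect = hashlib.sha256(f"{issuer}:{logic_set}".encode()).hexdigest()
            assert expect == token, "content does not match its secure identifier"
            self.cache[token] = logic_set
        return self.cache[token]

if __name__ == "__main__":
    repo = Repository()
    link = repo.put("alice", "grantAccess(bob, object42).")   # hypothetical credential
    print(Authorizer(repo).fetch(link))
```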
We envision SAFE as a new foundation for building secure network systems. We
used SAFE to build secure services based on case studies drawn from practice:
(i) a secure name service resolver similar to DNS that resolves a name across
multi-domain federated systems; (ii) a secure proxy shim to delegate access
control decisions in a key-value store; (iii) an authorization module for a
networked infrastructure-as-a-service system with a federated trust structure
(NSF GENI initiative); and (iv) a secure cooperative data analytics service
that adheres to individual secrecy constraints while disclosing the data. We
present empirical evaluation based on these case studies and demonstrate that
SAFE supports a wide range of applications with low overhead.
Abstract:
Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must attempt to eliminate redundant dynamic type tests. This is typically done using a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions are made to basic block versioning to give it interprocedural optimization capabilities. The first extension gives it the ability to attach type information to object properties and global variables. Entry point specialization then allows it to pass type information from callers to callees. Finally, call continuation specialization allows the types of return values from callees to be passed back to callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
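The abstract describes the mechanism at a high level; the minimal, hypothetical sketch below illustrates the core idea, in which block versions are generated lazily per type context and type tests are dropped once a type has been established. The names and the toy operation encoding are invented for illustration and are not the thesis implementation, which operates inside a JIT compiler.

```python
# Toy lazy basic block versioning: compile a block on demand for a given type
# context, skipping type tests already proven, and propagate the refined context.

def compile_block(block_id, ops, type_ctx, version_cache):
    """Return a specialized trace for (block_id, type_ctx), compiling lazily."""
    key = (block_id, tuple(sorted(type_ctx.items())))
    if key in version_cache:                      # reuse an existing version
        return version_cache[key]
    trace, ctx = [], dict(type_ctx)
    for op, var, typ in ops:
        if op == "type_test":
            if ctx.get(var) == typ:               # type already proven: test eliminated
                trace.append(f"nop         ; {var} is {typ}")
            else:
                trace.append(f"check {var} is {typ}")
                ctx[var] = typ                    # refine the context after the test
        elif op == "use":
            trace.append(f"{typ}_op {var}")       # specialized operation
    version_cache[key] = (trace, ctx)
    return trace, ctx

if __name__ == "__main__":
    cache = {}
    block = [("type_test", "x", "int"), ("use", "x", "int"),
             ("type_test", "x", "int"), ("use", "x", "int")]
    # First version: nothing known about x, one test emitted, the second removed.
    trace, ctx = compile_block("B0", block, {}, cache)
    print("\n".join(trace))
    # A caller that already knows x is an int gets a version with no tests at all.
    trace, _ = compile_block("B0", block, {"x": "int"}, cache)
    print("\n".join(trace))
```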
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from the Linux 2.4.32 kernel and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of these integrity properties during execution.
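The sketch below illustrates, in user space and with invented structures, the general idea of registering and scanning data invariants; the thesis does this against real kernel memory, with invariants derived by static analysis rather than written by hand.

```python
# A user-space sketch of the *idea* behind invariant monitoring. The data
# structures and the example invariant are placeholders for illustration only.

def make_monitor():
    checks = []   # list of (name, predicate) pairs

    def register(name, predicate):
        checks.append((name, predicate))

    def scan(snapshot):
        """Return the names of invariants violated by this snapshot."""
        return [name for name, pred in checks if not pred(snapshot)]

    return register, scan

if __name__ == "__main__":
    register, scan = make_monitor()
    # Example invariant: every entry of a (hypothetical) syscall table must
    # point into the trusted text range recorded at training time.
    register("syscall_table_in_kernel_text",
             lambda s: all(s["text_lo"] <= p < s["text_hi"]
                           for p in s["syscall_table"]))
    clean  = {"text_lo": 0xc0100000, "text_hi": 0xc0400000,
              "syscall_table": [0xc0123450, 0xc0123460]}
    hooked = dict(clean, syscall_table=[0xc0123450, 0xdeadbeef])  # rootkit-style hook
    print(scan(clean))    # []
    print(scan(hooked))   # ['syscall_table_in_kernel_text']
```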
Abstract:
In this paper, we describe a decentralized privacy-preserving protocol for securely casting trust ratings in distributed reputation systems. Our protocol allows n participants to cast their votes in a way that preserves the privacy of individual values against both internal and external attacks. The protocol is coupled with an extensive theoretical analysis in which we formally prove that our protocol is resistant to collusion against as many as n-1 corrupted nodes in the semi-honest model. The behavior of our protocol is tested in a real P2P network by measuring its communication delay and processing overhead. The experimental results uncover the advantages of our protocol over previous works in the area; without sacrificing security, our decentralized protocol is shown to be almost one order of magnitude faster than the previous best protocol for providing anonymous feedback.
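The abstract does not detail the cryptographic construction; the sketch below illustrates one standard ingredient that yields exactly this collusion-resistance property in the semi-honest model — additive secret sharing of the ratings, so that only the sum is revealed. It is offered as background intuition, not as the paper's protocol.

```python
# Illustrative only: additive secret sharing of ratings. Even if n-1 parties
# pool their shares, the remaining participant's rating stays hidden; only the
# aggregate is learned.
import random

MOD = 2**61 - 1   # arbitrary public modulus, large enough for the sum

def share(value, n):
    """Split `value` into n additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def private_sum(ratings):
    """Each participant shares its rating; each party publishes only a partial sum."""
    n = len(ratings)
    all_shares = [share(r, n) for r in ratings]               # row i: shares of rating i
    partials = [sum(col) % MOD for col in zip(*all_shares)]   # what each party reveals
    return sum(partials) % MOD                                # only the total is learned

if __name__ == "__main__":
    ratings = [4, 5, 2, 3]
    print(private_sum(ratings), "==", sum(ratings))
```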
Abstract:
This paper introduces the LiDAR compass, a bounded and extremely lightweight heading estimation technique that combines a two-dimensional laser scanner with axis maps, which represent the orientations of flat surfaces in the environment. Although suitable for a variety of indoor and outdoor environments, the LiDAR compass is especially useful for embedded and real-time applications requiring low computational overhead. For example, when combined with a sensor that can measure translation (e.g., wheel encoders), the LiDAR compass can be used to yield accurate, lightweight, and very easily implementable localization that requires no prior mapping phase. The utility of using the LiDAR compass as part of a localization algorithm was tested on a widely available open-source data set, an indoor environment, and a larger-scale outdoor environment. In all cases, it was shown that the growth in heading error was bounded, which significantly reduced the position error to less than 1% of the distance travelled.
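As a rough illustration of how an axis map can bound heading error, the following sketch — with assumed names and angle conventions, not the paper's implementation — snaps observed wall orientations to the nearest map axis and averages the residuals to correct the heading estimate.

```python
# Toy axis-map heading correction: flat surfaces in structured environments lie
# along a few dominant axes, so heading drift can be corrected by comparing
# observed wall orientations with the nearest axis in the map.
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def heading_correction(surface_angles, heading, axis_map):
    """surface_angles: wall orientations in the sensor frame (rad);
    heading: current heading estimate (rad); axis_map: global axis angles (rad)."""
    residuals = []
    for theta in surface_angles:
        g = wrap(theta + heading)                          # orientation in the world frame
        nearest = min(axis_map, key=lambda a: abs(wrap(g - a)))
        residuals.append(wrap(g - nearest))                # error w.r.t. the nearest axis
    return heading - sum(residuals) / len(residuals)       # corrected heading

if __name__ == "__main__":
    axes = [0.0, math.pi / 2, math.pi, -math.pi / 2]       # rectilinear environment
    observed = [0.09, 1.68, 0.11]                          # walls seen with ~0.1 rad bias
    print(heading_correction(observed, heading=0.0, axis_map=axes))  # ~ -0.103
```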
Abstract:
Multi-frequency Eddy Current (EC) inspection with a transmit-receive probe (two horizontally offset coils) is used to monitor the Pressure Tube (PT) to Calandria Tube (CT) gap of CANDU® fuel channels. Accurate gap measurements are crucial to ensure fitness for service; however, variations in probe liftoff, PT electrical resistivity, and PT wall thickness can generate systematic measurement errors. Validated mathematical models of the EC probe are very useful for data interpretation and may improve the gap measurement under inspection conditions where these parameters vary. As a first step, exact solutions for the electromagnetic response of a transmit-receive coil pair situated above two parallel plates separated by an air gap were developed. This model was validated against experimental data with flat-plate samples. Finite element method models revealed that this geometrical approximation could not accurately match experimental data with real tubes, so analytical solutions for the probe in a double-walled pipe (the CANDU® fuel channel geometry) were generated using the Second-Order Vector Potential (SOVP) formalism. All electromagnetic coupling coefficients arising from the probe and the layered conductors were determined and substituted into Kirchhoff's circuit equations for the calculation of the pickup coil signal. The flat-plate model was used as the basis for an Inverse Algorithm (IA) to simultaneously extract the relevant experimental parameters from EC data. The IA was validated over a large range of second-layer plate resistivities (1.7 to 174 µΩ·cm), plate wall thicknesses (~1 to 4.9 mm), probe liftoffs (~2 mm to 8 mm), and plate-to-plate gaps (~0 mm to 13 mm). The IA achieved a relative error of less than 6% for the extracted flat-plate resistivity and an accuracy of ±0.1 mm for the liftoff measurement. The IA was able to achieve a plate gap measurement with an accuracy of better than ±0.7 mm over a ~2.4 mm to 7.5 mm probe liftoff range and ±0.3 mm at nominal liftoff (2.42±0.05 mm), providing confidence in the general validity of the algorithm. This demonstrates the potential of using an analytical model to extract variable parameters that may affect the gap measurement accuracy.
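To make the inverse-algorithm idea concrete, the sketch below fits a forward model of the probe response to multi-frequency data with a standard least-squares routine and reads off the parameters of interest. The forward model here is a made-up placeholder; in the thesis it is the analytical flat-plate solution, and the frequencies, bounds, and parameter values are invented example numbers, not the validated ranges quoted above.

```python
# Sketch of the inversion mechanism only: least-squares extraction of
# (liftoff, resistivity, wall thickness, gap) from multi-frequency responses.
import numpy as np
from scipy.optimize import least_squares

FREQS = np.array([4e3, 8e3, 16e3, 32e3])   # excitation frequencies, Hz (illustrative)

def forward_model(params, f):
    """Placeholder probe response: NOT the physical model, just a smooth function
    of the four parameters used to exercise the fitting loop."""
    liftoff, rho, wall, gap = params
    return np.exp(-liftoff / 2.0) * np.log1p(f * 1e-4 / rho) / (1.0 + wall + 0.1 * gap)

def invert(measured, x0, bounds):
    residual = lambda p: forward_model(p, FREQS) - measured
    return least_squares(residual, x0, bounds=bounds).x

if __name__ == "__main__":
    nominal = np.array([2.4, 1.7, 4.0, 7.0])          # liftoff, rho, wall, gap
    data = forward_model(nominal, FREQS)               # synthetic "measurement"
    est = invert(data, x0=[3.0, 2.0, 3.0, 5.0],
                 bounds=([1.0, 0.5, 1.0, 0.0], [8.0, 200.0, 5.0, 13.0]))
    print(est)                                         # fitted parameter vector
```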
Abstract:
This laboratory session provides hands-on experience for students to visualize the beating human heart with ultrasound imaging. Simple views are obtained from which students can directly measure important cardiac dimensions in systole and diastole. This allows students to derive, from first principles, important measures of cardiac function, such as stroke volume, ejection fraction, and cardiac output. By repeating the measurements on a subject after a brief exercise period, increases in stroke volume and ejection fraction are easily demonstrable, with or without an increase in left ventricular end-diastolic volume (which indicates preload). Thus, factors that affect cardiac performance can readily be discussed. This activity may be performed as a practical demonstration and visualized using an overhead projector or networked computers, concentrating on using the ultrasound images to teach basic physiological principles. This has proved highly popular with students, who reported a significant improvement in their understanding of the Frank-Starling law of the heart with ultrasound imaging.
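The derived quantities mentioned above follow from standard definitions, which the short sketch below encodes; the numbers in the example are illustrative, not data from the session.

```python
# Standard derived measures of cardiac function from end-diastolic and
# end-systolic volumes (EDV, ESV) and heart rate (HR).
def stroke_volume(edv_ml, esv_ml):
    return edv_ml - esv_ml                                   # SV = EDV - ESV

def ejection_fraction(edv_ml, esv_ml):
    return stroke_volume(edv_ml, esv_ml) / edv_ml            # EF = SV / EDV

def cardiac_output(edv_ml, esv_ml, hr_bpm):
    return stroke_volume(edv_ml, esv_ml) * hr_bpm / 1000.0   # CO in L/min

if __name__ == "__main__":
    # Example at rest: EDV 120 mL, ESV 50 mL, HR 70 bpm -> SV 70 mL, EF ~0.58, CO ~4.9 L/min
    print(stroke_volume(120, 50), ejection_fraction(120, 50), cardiac_output(120, 50, 70))
```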
Abstract:
Na0.5Bi0.5TiO3 (NBT) is a well-known lead-free piezoelectric material with the potential to replace lead zirconate titanate (PZT) [1]; however, high leakage conductivity has been widely reported for the material [2]. Through a combination of Impedance Spectroscopy (IS), O2- ion transference (EMF) number experiments, and 18O tracer diffusion measurements, combined with Time-of-Flight Secondary Ion Mass Spectrometry (TOF-SIMS), it was identified that this leakage conductivity was due to oxygen ion conduction. The volatilization of bismuth during synthesis, causing oxygen vacancies, is believed to be responsible for the leakage conductivity [3]. The oxide-ion conductivity, when the material is doped with magnesium, exceeds that of yttria-stabilized zirconia (YSZ) at ~500 °C [3], making it a potential electrolyte material for Intermediate Temperature Solid Oxide Cells (ITSOCs). Figure 1 shows a comparison of the bulk oxide ion conductivity of 2 at.% Mg-doped NBT with that of other known oxide ion conductors.
As part of the UK-wide £5.7m 4CU project, research has concentrated on developing NBT for use in ITSOCs. With the aim of achieving mixed ionic and electronic conduction, transition metals were chemically doped onto the Ti site. A range of experimental techniques, including Scanning Electron Microscopy (SEM), IS, X-ray Photoelectron Spectroscopy (XPS), and X-ray Absorption Spectroscopy (XAS), was used to characterize the materials, investigating both conductivity and material structure. The potential of NBT as an ITSOC material, as well as the challenges of developing the material, will be discussed.
[1] Takenaka, T. et al. Jpn. J. Appl. Phys. 1999, 30, 2236.
[2] Hiruma, Y. et al. J. Appl. Phys. 2009, 105, 084112.
[3] Li, M. et al. Nature Materials 2013, 13, 31.
Abstract:
The multiuser selection scheduling concept has recently been proposed in the literature to increase the multiuser diversity gain and to overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be reallocated to data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of the delay-sensitive and the delay-tolerant systems, respectively, and compare them with full-feedback multiuser diversity schemes. The discrete rate region is presented analytically, where the maximum average system rate can be obtained by properly choosing the number of partial devices. They jointly optimise the number of partial devices and the per-device power saving in order to maximise the average system rate under the power requirement. Finally, the results demonstrate that the proposed scheme, which leverages the saved feedback power for data transmission, can outperform full-feedback multiuser diversity in non-identically distributed Rayleigh fading of the devices' channels.
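For orientation, the sketch below combines the two ingredients named above — proportional fair scheduling and multiuser selection with partial feedback — in a toy simulation. The parameter values, the exponential channel draw, and the EWMA constant are assumptions for illustration, not the system model analysed in the paper.

```python
# Toy proportional fair (PF) scheduler where only the k best devices feed back
# their channel state each slot (multiuser selection).
import random

def pf_schedule(rates, avg_thr, feedback_set):
    """Pick the device with the largest instantaneous-rate / average-throughput
    ratio among those that reported feedback this slot."""
    return max(feedback_set, key=lambda i: rates[i] / avg_thr[i])

def simulate(n_devices=8, k_feedback=3, slots=10000, beta=0.01):
    avg_thr = [1e-6] * n_devices                     # EWMA average throughput
    for _ in range(slots):
        rates = [random.expovariate(1.0 / (i + 1)) for i in range(n_devices)]  # non-i.i.d.
        feedback = sorted(range(n_devices), key=lambda i: rates[i])[-k_feedback:]
        chosen = pf_schedule(rates, avg_thr, feedback)
        for i in range(n_devices):                   # EWMA update; only the winner transmits
            served = rates[i] if i == chosen else 0.0
            avg_thr[i] = (1 - beta) * avg_thr[i] + beta * served
    return avg_thr

if __name__ == "__main__":
    print([round(t, 3) for t in simulate()])
```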
Abstract:
We study a multiuser multicarrier downlink communication system in which the base station (BS) employs a large number of antennas. Assuming frequency-division duplex operation, we provide a beam domain channel model as the number of BS antennas grows asymptotically large. With this model, we first derive a closed-form upper bound on the achievable ergodic sum-rate before developing necessary conditions to asymptotically maximize the upper bound, with only statistical channel state information at the BS. Inspired by these conditions, we propose a beam division multiple access (BDMA) transmission scheme, where the BS communicates with users via different beams. For BDMA transmission, we design user scheduling to select users within non-overlapping beams, work out an optimal pilot design under a minimum mean square error criterion, and provide optimal pilot sequences by utilizing Zadoff-Chu sequences. The proposed BDMA scheme significantly reduces the pilot overhead as well as the processing complexity at the transceivers. Simulations demonstrate the high spectral efficiency of BDMA transmission and the advantages in bit error rate performance of the proposed pilot sequences.
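Zadoff-Chu sequences have a standard closed form; the short sketch below, with an arbitrary example length and root index rather than the paper's pilot design, generates one and checks the constant-amplitude and ideal cyclic autocorrelation properties that make such sequences attractive as pilots.

```python
# Generate a Zadoff-Chu sequence and verify its key properties.
import numpy as np

def zadoff_chu(u, n_zc):
    """Root-u Zadoff-Chu sequence of odd length n_zc (gcd(u, n_zc) must be 1)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

if __name__ == "__main__":
    x = zadoff_chu(u=5, n_zc=63)
    print(np.allclose(np.abs(x), 1.0))                        # constant amplitude
    corr = [np.abs(np.vdot(x, np.roll(x, s))) for s in range(63)]
    print(round(corr[0], 1), round(max(corr[1:]), 6))          # 63.0 at zero shift, ~0 elsewhere
```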
Abstract:
Combining intrinsically conducting polymers with carbon nanotubes (CNTs) helps in creating composites with superior electrical and thermal characteristics. These composites are capable of replacing metals and semiconductors as they possess a unique combination of electrical conductivity, flexibility, stretchability, softness, and biocompatibility. Their potential for use in various organic devices, such as supercapacitors, printable conductors, optoelectronic devices, sensors, actuators, electrochemical devices, electromagnetic interference shielding, field-effect transistors, LEDs, and thermoelectrics, makes them excellent substitutes for present-day semiconductors. However, many of these potential applications have not been fully exploited because of various open-ended challenges. Composites meant for use in organic devices require highly stable conductivity for the longevity of the devices. CNTs, when incorporated at specific proportions and with special methods, contribute quite positively to this end.

The increasing demand for energy and depleting fossil fuel reserves have broadened the scope for research into alternative energy sources. A unique and efficient method for harnessing energy is thermoelectric energy conversion. Here, heat is converted directly into electricity using a class of materials known as thermoelectric materials. Though polymers have low electrical conductivity and thermopower, their low thermal conductivity favours their use as thermoelectric materials. The thermally disconnected, but electrically connected, carrier pathways in CNT/polymer composites can satisfy the so-called "phonon-glass/electron-crystal" property required of thermoelectric materials.

Strain sensing is commonly used for monitoring in engineering, medicine, space, and ocean research. Polymeric composites are ideal candidates for the manufacture of strain sensors, and conducting elastomeric composites containing CNTs are widely used for this application. These CNT/polymer composites offer resistance change over a large strain range due to their low Young's modulus and higher elasticity. They are also capable of covering surfaces with arbitrary curvatures.

Due to the high operating frequency and bandwidth of electronic equipment, electromagnetic interference (EMI) has attained the tag of an "environmental pollutant", affecting other electronic devices as well as living organisms. Among EMI shielding materials, polymer composites based on carbon nanotubes show great promise. The high strength and stiffness, extremely high aspect ratio, and good electrical conductivity of CNTs make them a filler of choice for shielding applications. A method for better dispersion, orientation, and connectivity of the CNTs in the polymer matrix is required to enhance conductivity and EMI shielding.

This thesis presents a detailed study on the synthesis of functionalised multiwalled carbon nanotube/polyaniline composites and their application in electronic devices. The major areas of focus include DC conductivity retention at high temperature; thermoelectric, strain-sensing, and electromagnetic interference shielding properties; and thermogravimetric, dynamic mechanical, and tensile analysis, in addition to structural and morphological studies.
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing, and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimize the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components, and supporting tools were also included to help developers design, implement, optimize, and test WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to use of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation, and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
Abstract:
This report is submitted for the Master's degree in Social Work at the Instituto Superior Miguel Torga. Motivated by concerns arising from day-to-day professional practice in supporting citizens receiving the Rendimento Social de Inserção (R.S.I., Social Insertion Income), we sought to understand the relationship between poverty and the labour market. Introduced in 1996 by Law no. 19-A/96 of 29 June as a guaranteed minimum income benefit, the R.S.I. has adopted increasingly refined ways of selecting its clientele, whether by redefining the concept of household and the assessment of its income, or through the contractualization of the benefit, with ever stronger penalties for non-compliance regarding employment and training. The overall goal is to understand how placement in the labour market actually takes place and what opportunities for social (dis)insertion result from it for the beneficiaries.
We also sought to analyse the configuration of the proposals offered under the insertion contract to men and women, and to the "old" and "new" poor. Exploratory interviews were therefore conducted with the technicians of the Centro de Emprego e Formação Profissional Entre Douro e Vouga (CEFP-EDV, Employment and Vocational Training Centre), the Gabinete de Inserção Profissional (GIP, Professional Insertion Office), and the Núcleo Local de Inserção (NLI, Local Insertion Group) of Stª Mª da Feira, and a questionnaire survey was applied to R.S.I. beneficiaries with Insertion Contracts for placement in the labour market. The CEFP-EDV and GIP technicians reported difficulties in monitoring beneficiaries and managing their careers, owing to the overload and bureaucratic nature of the tasks required by their institutions. The beneficiaries likewise consider that the CEFP-EDV is neither efficient nor effective in placing people in the labour market, thus failing to fulfil the function legally assigned to it. Men benefit far more than women in their relationship with the Employment Centre, because they receive more job and training offers. The "old poverty" remains in unemployment and on the benefit for longer than the "new" poor, who are rarely called up by the CEFP-EDV. It is the informal networks that play the more active and decisive role in the process of labour-market insertion. For the population surveyed, insertion through work alone did not constitute a way out of poverty.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08