567 results for INTRUSIVE LUXATION
Abstract:
Preserving the cultural heritage of the performing arts raises difficult and sensitive issues, as each performance is unique by nature and the juxtaposition between the performers and the audience cannot be easily recorded. In this paper, we report on an experimental research project to preserve another aspect of the performing arts—the history of their rehearsals. We have specifically designed non-intrusive video recording and on-site documentation techniques to make this process transparent to the creative crew, and have developed a complete workflow to publish the recorded video data and their corresponding meta-data online as Open Data using state-of-the-art audio and video processing to maximize non-linear navigation and hypervideo linking. The resulting open archive is made publicly available to researchers and amateurs alike and offers a unique account of the inner workings of the worlds of theater and opera.
Psychotic Kernel in a Potential Psychosis. Dead Mother Complex and Blank Psychosis: A Case Study
Abstract:
This study's main purpose was to explore two important concepts in the theory of the French psychoanalyst André Green: the dead mother complex and blank psychosis. The decathexis caused by maternal emotional withdrawal (the dead mother complex) induces an internal void (blank anguish). This feeling of emptiness and stoppage, a depression without affects, and the negative hallucination are manifestations of blank psychosis (a matrix structure in which the psychotic kernel can be observed even without a manifest psychosis). The method used was the case study of a young male inpatient subject. The assessment instruments included the Rorschach projective technique (applied at the beginning and end of the inpatient treatment) and the Thematic Apperception Test (applied at the beginning of treatment). Through the material collected in individual therapeutic sessions and the projective techniques applied, points of contact were observed between Green's concept of blank psychosis, Winnicott's False Self, and Helene Deutsch's "as if" personalities. The mother's metaphorical death, her emotional withdrawal, may lie at the origin of these disturbances, in which the psychotic kernel can be observed. In clinical situations such as these, where the internal void predominates, psychotherapeutic practice requires a particular positioning from the clinician, who should be neither too close to the patient (experienced as intrusive) nor too distant (experienced as abandoning).
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
This thesis examines the impact on child and adolescent psychotherapists within CAMHS of the introduction of routine outcome measures (ROMs) associated with the Children and Young People's Improving Access to Psychological Therapies programme (CYP-IAPT). All CAMHS therapists working within a particular NHS mental health Trust were required to trial CYP-IAPT ROMs as part of their everyday clinical practice from October 2013 to September 2014. During this period considerable freedom was allowed as to which of the measures each therapist used and at what frequency. In order to assess the impact of CYP-IAPT ROMs on child psychotherapy, I conducted semi-structured interviews with eight psychotherapists within a particular CAMHS partnership within one NHS Trust. Each statement was coded and grouped according to whether it related to initial (generic) assessment, goal setting/monitoring, monitoring ongoing progress, therapeutic alliance, or issues concerning how data might be used or interpreted by managers and commissioners. Analysis of the interviews revealed greatest concern about session-by-session ROMs, as these are felt to impact most significantly on psychotherapy; therapists felt that session-by-session ROMs do not take account of negative transference relationships, are overly repetitive, and are used to reward or punish the therapist. Measures used at assessment and review were viewed as most compatible with psychotherapy, although often experienced as excessively time-consuming. The Goal Based Outcome Measure was generally experienced as compatible with psychotherapy so long as goals are formed collaboratively between therapist and young person. There was considerable anxiety about how data may be (mis)used and (mis)interpreted by managers and commissioners, for example to end treatment prematurely, trigger a change of therapist in the face of negative ROMs data, or to damage psychotherapy.
Use of ROMs for short term and generic work was experienced as less intrusive and contentious.
Abstract:
The Arctic has warmed rapidly, and there is an urgent need to anticipate the effects this could have on the protists at the base of the food web. The phytoplankton of the Arctic Ocean includes the pico- and nano-eukaryotes (0.45-10 μm cell diameter), several of which are ecotypes found only in the Arctic, while others are introduced from more southerly oceans. As temperate oceans penetrate into the Arctic, it becomes relevant to know whether these microbial communities could be modified. The Svalbard archipelago is an ideal region for observing the biogeography of microbial communities under the influence of both polar and temperate processes. Although geographically close to one another, the coastal regions surrounding Svalbard are subject to alternating intrusions of Arctic and Atlantic water masses in addition to local conditions. Eight sites were sampled in July 2013 to identify protists along a gradient of depth and water masses around the archipelago. In addition to standard oceanographic variables, water was sampled to construct amplicon libraries targeting the 18S SSU rRNA and its gene, which were then sequenced at high throughput. Five of the study sites had low chlorophyll concentrations, with post-bloom community compositions dominated by dinoflagellates, ciliates, putative parasitic alveolates, chlorophytes and prymnesiophytes. Water mass intrusion and local environmental conditions were correlated with community structure, with the origin of the water mass contributing most to the phylogenetic distance between microbial communities. In three fjords, high chlorophyll concentrations suggested bloom activity.
One fjord was dominated by Phaeocystis, a second by an Arctic clade identified as a Pelagophyceae, and a third by a mixed assemblage. Overall, a strong signal of Arctic-related ecotypes predominated around Svalbard.
Abstract:
Research in ubiquitous and pervasive technologies has made it possible to recognise activities of daily living through non-intrusive sensors. The data captured from these sensors must be classified using machine learning or knowledge-driven techniques to infer and recognise activities. The process of discovering activities and activity-object patterns from sensors tagged to objects as they are used is critical to recognising the activities. In this paper, we propose a topic-model process for discovering activities and activity-object patterns from the interactions of low-level state-change sensors. We also develop a recognition and segmentation algorithm to recognise activities and their boundaries. The experimental results presented validate our framework and show that it is comparable to existing approaches.
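The recognition step described in this abstract can be illustrated with a toy sketch. This is a minimal illustration under assumptions, not the paper's algorithm: the activity labels, object sensors and likelihood-style scoring below are hypothetical stand-ins for the activity-object patterns a topic model might discover from state-change sensor data.

```python
from collections import Counter

# Hypothetical activity-object patterns: each "topic" is a count
# distribution over object sensors, as a topic model might discover.
PATTERNS = {
    "make_tea": Counter({"kettle": 3, "cup": 2, "tea_jar": 2}),
    "wash_dishes": Counter({"tap": 3, "sponge": 2, "cup": 1}),
}

def recognise(window):
    """Score a window of object-sensor firings against each pattern
    and return the best-matching activity label."""
    counts = Counter(window)

    def score(pattern):
        total = sum(pattern.values())
        s = 1.0
        # Product of (smoothed) pattern probabilities for each firing.
        for obj, n in counts.items():
            s *= (pattern.get(obj, 0.1) / total) ** n
        return s

    return max(PATTERNS, key=lambda a: score(PATTERNS[a]))

print(recognise(["kettle", "cup", "kettle", "tea_jar"]))  # make_tea
```

Segmentation would then amount to sliding such a window over the sensor stream and marking activity boundaries where the best-matching label changes.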
Abstract:
The debriefing phase in human patient simulation is considered crucial for learning. To ensure good learning conditions, the use of small groups is recommended, which poses a major challenge when the student count is high. The use of large groups may provide an alternative to typical lecture-style education and contribute to more frequent and repeated training, which is considered important for achieving simulation competency. The purpose of the present study was to describe nursing students' experiences of debriefing conducted in small and large groups, using a qualitative descriptive approach. The informants had participated in a human patient simulation situation in either large or small groups. Data were collected through five focus-group interviews and analysed by content analysis. The findings showed that, independent of group size, the informants experienced the learning strategies as unfamiliar and intrusive, and in the large groups to such an extent that learning was hampered. Debriefing was perceived as offering excellent opportunities for transferable learning, and activity, predictability and preparedness were deemed essential. Small groups provided the best learning conditions in that safety and security were ensured, but were perceived as providing limited challenges to accommodate the professional requirements of a nurse. Simulation competency as a prerequisite for learning was shown not to develop in isolation during simulation, but to depend on a systematic effort to build a learning community in the programme in general. The faculty needs to support the students in becoming conscious of, and accustomed to, learning out of their comfort zone.
Abstract:
An optimal method for identifying problematic drainage zones in cranberry fields is of practical interest to growers, as it can support the development of strategies to improve crop yield. The objective of this study was to develop a non-intrusive methodology for diagnosing a drainage system using Ground Penetrating Radar (GPR) imagery, with the goal of locating drainage-restricting zones and linking GPR scans to soil properties. A GPR system equipped with a monostatic antenna was used to acquire data in two cranberry fields: one built on organic soil and the other on mineral soil. Three-dimensional visualization of the field stratification was possible after interpolation and facies analysis. The spatial variability of crop yield and the saturated hydraulic conductivity of the soil were compared with the GPR data using two methods: percentage-difference calculation and entropy estimation. Data visualization coupled with analysis highlighted the subsurface geometry and important discontinuities in the fields. The results show a good correlation between zones where the restrictive layer is shallower and zones of low yield. The level of similarity between saturated hydraulic conductivity and the depth of the restrictive layer confirms the presence of the latter. The next step was the reconstruction of the electromagnetic wave and its fitting by inverse modeling. Quantitative information was extracted from the scans: dielectric permittivity, electrical conductivity, and the thickness of subsurface strata. The modeled dielectric permittivities agree with those measured in situ and with values from the literature.
Finally, by enabling the characterization of subsurface discontinuities, the zones most relevant to improving drainage and irrigation were located, in order to maximize yield.
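The basic GPR relationship linking a reflector's two-way travel time to its depth through the soil's relative dielectric permittivity can be sketched as follows. This is the textbook approximation, not the study's inverse-modeling procedure, and the example values are hypothetical:

```python
C = 0.3  # speed of light in vacuum, m/ns

def layer_depth(two_way_time_ns, rel_permittivity):
    """Depth of a reflector from its two-way travel time, using
    v = c / sqrt(eps_r) as the wave velocity in the soil."""
    v = C / rel_permittivity ** 0.5  # m/ns
    return v * two_way_time_ns / 2.0  # divide by 2: down and back

# e.g. a restrictive layer echoing at 20 ns in wet soil with eps_r ≈ 25:
print(round(layer_depth(20.0, 25.0), 2))  # 0.6 m
```

In practice the permittivity itself is unknown, which is why the study fits it by inverse modeling before converting travel times to strata thicknesses.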
Abstract:
The purpose of this bachelor's thesis is to find out how strong a DRM system can be before consumers no longer accept it. DRM systems come in many levels of strictness, but they are not suitable as-is for all platforms. Digital rights management in the games industry follows dynamics of its own, distinct from, for example, the music industry. Moreover, there is a currently accepted level of DRM from which it can be dangerous to deviate. The study is qualitative in nature, applying the principles of both discourse analysis and content analysis. The research material consists of the texts of various discussion threads, on the basis of which an answer to the research question is sought. The threads were classified by strength according to how strong the DRM-related news item was that spawned each thread. Because the material is conversational language that always carries its own meaning in context, the chosen methods are well suited to analysing it. Based on the analyses of the different threads, it can be said that DRM cannot exceed the currently prevailing level. Even a small deviation from that level can cause great resentment among consumers, to the point where the company loses revenue. The current level was reached through various experiments at consumers' expense, so they will not willingly accept any level stricter than the one prevailing at the time. If a company finds it must tighten DRM, the tightening must be done gradually and disguised with additional features. Consumers are aware of their rights, and they are not willing to give them up any more than necessary.
Abstract:
Numerous studies of the dual-mode scramjet isolator, a critical component in preventing inlet unstart and/or vehicle loss by containing a collection of flow disturbances called a shock train, have been performed since the dual-mode propulsion cycle was introduced in the 1960s. Low-momentum corner flow and other three-dimensional effects inherent to rectangular isolators have, however, been largely ignored in experimental studies of the boundary-layer-separation-driven isolator shock train dynamics. Furthermore, the two-dimensional diagnostic techniques of past works, be they single-perspective line-of-sight schlieren/shadowgraphy or single-axis wall pressure measurements, have been unable to resolve the three-dimensional flow features inside the rectangular isolator. These flow characteristics need to be thoroughly understood if robust dual-mode scramjet designs are to be fielded. The work presented in this thesis is focused on experimentally analyzing shock train/boundary layer interactions from multiple perspectives in aspect ratio 1.0, 3.0, and 6.0 rectangular isolators with inflow Mach numbers ranging from 2.4 to 2.7. Secondary steady-state Computational Fluid Dynamics studies are performed to compare against the experimental results and to provide additional perspectives on the flow field. Specific issues addressed in this work that remain unresolved after decades of isolator shock train studies include the three-dimensional formation of the isolator shock train front, the spatial and temporal scales of low-momentum corner flow separation, the transient behavior of the shock train/boundary layer interaction at specific coordinates along the isolator's lateral axis, and the effects of the rectangular geometry on semi-empirical relations for shock train length prediction. A novel multiplane shadowgraph technique is developed to resolve the structure of the shock train along both the minor and major duct axes simultaneously.
It is shown that the shock train front is of a hybrid oblique/normal nature. Initial low-momentum corner flow separation spawns the formation of oblique shock planes which interact and proceed toward the center flow region, becoming more normal in the process. The hybrid structure becomes more two-dimensional as aspect ratio is increased, but corner flow separation precedes center flow separation on the order of 1 duct height for all aspect ratios considered. Additional instantaneous oil flow surface visualization shows the symmetry of the three-dimensional shock train front around the lower wall centerline. Quantitative synthetic schlieren visualization shows the density gradient magnitude approximately doubling between the corner oblique and center flow normal structures. Fast-response pressure measurements acquired near the corner region of the duct show preliminary separation in the outer regions preceding centerline separation on the order of 2 seconds. Non-intrusive Focusing Schlieren Deflectometry Velocimeter measurements reveal that both the shock train oscillation frequency and velocity component decrease as measurements are taken away from the centerline and towards the side-wall region, and confirm the more two-dimensional shock train front approximation for higher aspect ratios. An updated modification to Waltrup & Billig's original semi-empirical shock train length relation for circular ducts based on centerline pressure measurements is introduced to account for rectangular isolator aspect ratio, upstream corner separation length scale, and major- and minor-axis boundary layer momentum thickness asymmetry. The latter is derived both experimentally and computationally, and it is shown that the major-axis (side-wall) boundary layer has lower momentum thickness compared to the minor-axis (nozzle-bounded) boundary layer, making it more separable.
Furthermore, it is shown that the updated correlation drastically improves shock train length prediction capabilities in higher aspect ratio isolators. This thesis suggests that performance analysis of rectangular confined supersonic flow fields can no longer be based on observations and measurements obtained along a single axis alone. Knowledge gained by the work performed in this study will allow for the development of more robust shock train leading edge detection techniques and isolator designs which can greatly mitigate the risk of inlet unstart and/or vehicle loss in flight.
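For reference, the circular-duct correlation that the thesis modifies can be sketched as follows. The form below is the commonly cited Waltrup & Billig expression and is given for illustration only; the symbols and sample values are assumptions, and the thesis's updated rectangular-duct version adds further terms for aspect ratio, corner separation, and momentum thickness asymmetry:

```python
def shock_train_length(mach, p_ratio, re_theta, diameter, theta):
    """Illustrative Waltrup & Billig style shock-train length estimate
    for circular ducts. Commonly cited form (treat as a sketch):

        s (M^2 - 1) Re_theta^0.25 / sqrt(D * theta)
            = 50 dp + 170 dp^2,   dp = p2/p1 - 1

    where s is the shock-train length, M the inflow Mach number,
    Re_theta the momentum-thickness Reynolds number, D the duct
    diameter and theta the boundary-layer momentum thickness.
    """
    dp = p_ratio - 1.0
    rhs = 50.0 * dp + 170.0 * dp ** 2
    return rhs * (diameter * theta) ** 0.5 / ((mach ** 2 - 1.0) * re_theta ** 0.25)

# Hypothetical inputs: Mach 2.5 inflow, pressure ratio 4, Re_theta = 1e4,
# D = 5 cm, theta = 1 mm.
s = shock_train_length(2.5, 4.0, 1.0e4, 0.05, 1.0e-3)
```

The qualitative behavior is what matters here: the predicted length grows quadratically with the imposed pressure rise, which is why corner-separation and asymmetry corrections change the prediction substantially in rectangular ducts.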
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and Intranet. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" can possibly indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
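The n-gram monitoring idea can be illustrated with a minimal sketch. The action names, smoothing constants, and scoring below are hypothetical illustrations of the general technique, not Intruder Detector's implementation:

```python
from collections import defaultdict

class NGramAuthenticator:
    """Continuous-authentication sketch: an n-gram model of a user's
    action sequence, scoring new activity by how familiar its action
    transitions are. Low scores flag behaviour that deviates from the
    trained profile."""

    def __init__(self, n=2):
        self.n = n
        self.counts = defaultdict(int)    # full n-gram counts
        self.context = defaultdict(int)   # (n-1)-gram context counts

    def train(self, actions):
        for i in range(len(actions) - self.n + 1):
            gram = tuple(actions[i:i + self.n])
            self.counts[gram] += 1
            self.context[gram[:-1]] += 1

    def score(self, actions):
        """Average add-one-smoothed transition probability; the 20 in the
        denominator is a hypothetical vocabulary-size smoothing term."""
        probs = []
        for i in range(len(actions) - self.n + 1):
            gram = tuple(actions[i:i + self.n])
            probs.append((self.counts[gram] + 1) / (self.context[gram[:-1]] + 20))
        return sum(probs) / len(probs)

auth = NGramAuthenticator()
auth.train(["login", "inbox", "read", "reply", "inbox", "read", "logout"])
# Familiar behaviour scores higher than an unseen action sequence:
assert auth.score(["login", "inbox", "read"]) > auth.score(["login", "admin", "export"])
```

In a deployed system the score would be computed over a sliding window of web-log actions, with a threshold (and role-level models) deciding when to challenge or terminate the session.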
Abstract:
New methods of nuclear fuel and cladding characterization must be developed and implemented to enhance the safety and reliability of nuclear power plants. One class of such advanced methods is aimed at the characterization of fuel performance by performing minimally intrusive in-core, real time measurements on nuclear fuel on the nanometer scale. Nuclear power plants depend on instrumentation and control systems for monitoring, control and protection. Traditionally, methods for fuel characterization under irradiation are performed using a “cook and look” method. These methods are very expensive and labor-intensive since they require removal, inspection and return of irradiated samples for each measurement. Such fuel cladding inspection methods investigate oxide layer thickness, wear, dimensional changes, ovality, nuclear fuel growth and nuclear fuel defect identification. These methods are also not suitable for all commercial nuclear power applications as they are not always available to the operator when needed. Additionally, such techniques often provide limited data and may exacerbate the phenomena being investigated. This thesis investigates a novel, nanostructured sensor based on a photonic crystal design that is implemented in a nuclear reactor environment. The aim of this work is to produce an in-situ radiation-tolerant sensor capable of measuring the deformation of a nuclear material during nuclear reactor operations. The sensor was fabricated on the surface of nuclear reactor materials (specifically, steel and zirconium based alloys). Charged-particle and mixed-field irradiations were both performed on a newly-developed “pelletron” beamline at Idaho State University's Research and Innovation in Science and Engineering (RISE) complex and at the University of Maryland's 250 kW Training Reactor (MUTR). 
The sensors were irradiated to 6 different fluences (ranging from 1 to 100 dpa), followed by intensive characterization using focused ion beam (FIB), transmission electron microscopy (TEM) and scanning electron microscopy (SEM) to investigate the physical deformation and microstructural changes between different fluence levels, to provide high-resolution information regarding the material performance. Computer modeling (SRIM/TRIM) was employed to simulate damage to the sensor as well as to provide significant information concerning the penetration depth of the ions into the material.
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied, which solve the problems of memory storage and huge matrix inversion faced by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling a model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the required changes to the model code are minimal: only a few lines, which facilitate input and output. Apart from being simple to implement, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched. The use of multiple automatic stations with real-time data is important to avoid this time-sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF and the ensemble-size limit on performance lead to the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, the result may be a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
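The file-based coupling described above can be sketched in a few lines. This is an illustrative skeleton only: the JSON state file, the toy "model" step, and the relaxation-style analysis update are hypothetical stand-ins for COHERENS and the VEnKF procedure, which would normally be invoked as external executables by the control program:

```python
import json
import os
import tempfile

def write_state(path, state):
    """Control program hands the state to model/DA via a file."""
    with open(path, "w") as f:
        json.dump(state, f)

def read_state(path):
    """Read the state back after the external step finishes."""
    with open(path) as f:
        return json.load(f)

def assimilation_cycle(state, observation, gain=0.5):
    """One cycle: a toy 'forecast' (stand-in for the model run),
    then a toy analysis update nudging the state toward the observation."""
    forecast = [x * 1.1 for x in state]
    return [x + gain * (observation - x) for x in forecast]

workdir = tempfile.mkdtemp()
state_file = os.path.join(workdir, "state.json")

write_state(state_file, [1.0, 2.0])        # initial state
state = read_state(state_file)             # "model" reads it
state = assimilation_cycle(state, observation=1.5)
write_state(state_file, state)             # DA writes the analysis back
print(read_state(state_file))
```

Because the only contract between the components is the state file on disk, the model code needs just the few input/output lines mentioned above, and model and DA can be written in different languages.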
Abstract:
The speed with which data has moved from being scarce, expensive and valuable, justifying detailed and careful verification and analysis, to a situation where the streams of detailed data are almost too large to handle has caused a series of shifts to occur. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. The capacity to harness massive quantities of existing data has driven Big Data applications until recently. Now the data flows in real time are rising swiftly, becoming more invasive, and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often quite remarkably intrusive personal data need to be resolved, and rapidly, but are likely to encounter rising resistance from industrial and commercial bodies who see this data flow as 'theirs'. There have been many changes in ICT that have led to stresses in the resolution of the conflicts between IP exploiters and their customers, but this one is of a different scale, due to the wide potential for individual customisation of pricing, identification, and the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift in the current confusion over who owns what data into alignments that better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits to all that are clearly available if these issues can be positively resolved. The customers need to make these data flows contestable in some form. These Big Data flows are only going to grow and become ever more instructive.
A better balance is necessary. For the first time these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown. Yet the data gathered is not available to the subjects. This is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. This Web extra is the audio part of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences".
Abstract:
Internally-grooved refrigeration tubes maximize tube-side evaporative heat transfer rates and have been identified as a most promising technology for integration into compact cold plates. Unfortunately, the absence of phenomenological insights and physical models hinders the extrapolation of grooved-tube performance to new applications. The success of regime-based heat transfer correlations for smooth tubes has motivated the current effort to explore the relationship between flow regimes and enhanced heat transfer in internally-grooved tubes. In this thesis, a detailed analysis of smooth and internally-grooved tube data reveals that performance improvement in internally-grooved tubes at low-to-intermediate mass flux is a result of early flow regime transition. Based on this analysis, a new flow regime map and corresponding heat transfer coefficient correlation, which account for the increased wetted angle, turbulence, and Gregorig effects unique to internally-grooved tubes, were developed. A two-phase test facility was designed and fabricated to validate the newly-developed flow regime map and regime-based heat transfer coefficient correlation. As part of this setup, a non-intrusive optical technique was developed to study the dynamic nature of two-phase flows. It was found that different flow regimes result in unique temporally varying film thickness profiles. Using these profiles, quantitative flow regime identification measures were developed, including the ability to explain and quantify the more subtle transitions that exist between dominant flow regimes. Flow regime data, based on the newly-developed method, and heat transfer coefficient data, using infrared thermography, were collected for two-phase HFE-7100 flow in horizontal 2.62mm - 8.84mm diameter smooth and internally-grooved tubes with mass fluxes from 25-300 kg/m²s, heat fluxes from 4-56 kW/m², and vapor qualities approaching 1. 
In total, over 6500 combined data points for the adiabatic and diabatic smooth and internally-grooved tubes were acquired. Based on results from the experiments and a reinterpretation of data from independent researchers, it was established that heat transfer enhancement in internally-grooved tubes at low-to-intermediate mass flux is primarily due to early flow regime transition to Annular flow. The regime-based heat transfer coefficient outperformed empirical correlations from the literature, with mean and absolute deviations of 4.0% and 32% for the full range of data collected.
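The mean and absolute deviation figures quoted above are standard correlation-assessment metrics and can be computed as follows. The sample values here are hypothetical; the thesis's data set comprises thousands of points:

```python
def mean_and_absolute_deviation(predicted, measured):
    """Mean (signed bias) and mean-absolute deviation, in percent,
    of a correlation's predictions against measured data."""
    rel = [(p - m) / m for p, m in zip(predicted, measured)]
    mean_dev = 100.0 * sum(rel) / len(rel)
    abs_dev = 100.0 * sum(abs(r) for r in rel) / len(rel)
    return mean_dev, abs_dev

# Hypothetical sample: predicted vs. measured heat transfer coefficients
mean_dev, abs_dev = mean_and_absolute_deviation([5.2, 4.1, 6.3], [5.0, 4.5, 6.0])
```

A small mean deviation with a larger absolute deviation, as reported for the regime-based correlation (4.0% vs. 32%), indicates predictions that scatter about the data without a strong systematic bias.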