27 results for Objective function values

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

90.00%

Publisher:

Abstract:

The usual load flow programs were, in general, developed to simulate electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms behind these formulations were mostly based on the characteristics of transmission systems alone, which were the main concern of engineers and researchers. Yet the physical characteristics of transmission systems are quite different from those of distribution systems. In transmission systems, voltage levels are high and lines are generally very long, so the capacitive and inductive effects that arise have a considerable influence on the quantities of interest and must be taken into account. Also, the loads in transmission systems have a macro nature, for example cities, neighborhoods or large industries. Such loads are, in general, practically balanced, which reduces the need for a three-phase load flow methodology. Distribution systems, on the other hand, have different characteristics: voltage levels are low compared to transmission, which almost cancels the capacitive effects of the lines. The loads are, in this case, transformers whose secondaries supply small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high and a three-phase methodology becomes important. Moreover, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, require a three-phase methodology to allow simulation of their real behavior. For these reasons, a method for three-phase load flow calculation was first developed in this work, in order to simulate the steady-state behavior of distribution systems. 
To this end, the Power Summation Algorithm was used as the basis for the three-phase method. This algorithm has been widely tested and approved by researchers and engineers for the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between phases; the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows various configurations to be represented according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow in order to support subsequent optimization. These parameters are obtained by computing partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method was presented, using these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators. 
For loss reduction, the objective function is the sum of the losses in all parts of the system. For voltage profile correction, it is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to several feeders are presented, to give insight into their performance and accuracy
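The core of the Power Summation approach is a backward/forward sweep over a radial feeder. The sketch below is a deliberately simplified single-phase, single-chain version (the thesis itself uses three-phase circuits with phase coupling and the Carson reduction); all names and the chain-feeder layout are illustrative assumptions.

```python
def power_summation_load_flow(v_source, lines, loads, tol=1e-8, max_iter=100):
    """Single-phase power-summation load flow on a chain radial feeder.

    lines  -- complex series impedance of segment i (feeding node i+1 from node i)
    loads  -- complex power S demanded at nodes 1..n
    Returns the list of complex node voltages [v_source, v1, ..., vn].
    """
    n = len(loads)
    v = [v_source] * (n + 1)
    for _ in range(max_iter):
        # Backward sweep: accumulate downstream power plus segment losses.
        s_flow = [0j] * n
        acc = 0j
        for i in range(n - 1, -1, -1):
            acc += loads[i]                       # power at receiving end
            current = abs(acc / v[i + 1])         # |I| from current voltage estimate
            acc += lines[i] * current ** 2        # add Z*|I|^2 segment losses
            s_flow[i] = acc                       # power at sending end
        # Forward sweep: update voltages from the source outward.
        max_dv = 0.0
        for i in range(n):
            current = (s_flow[i] / v[i]).conjugate()
            v_new = v[i] - lines[i] * current
            max_dv = max(max_dv, abs(v_new - v[i + 1]))
            v[i + 1] = v_new
        if max_dv < tol:
            break
    return v
```

For a one-segment per-unit feeder (Z = 0.01 + j0.02, S = 0.5 + j0.2) the sweep converges in a few iterations to a slightly depressed end-node voltage, which is the expected qualitative behavior.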


Relevance:

90.00%

Publisher:

Abstract:

This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data collected by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-Free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It also eliminates the main parameter of the HPM, the penalty factor, which weights the terms of the objective function; adjusting this parameter controls how the model recovers the hidden category structures and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are shown, both mixed-integer linear programs, from which lower bounds for the PFHPM were obtained. These bounds were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving both artificial instances generated by Monte Carlo simulation and real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm can recover the original category structures underlying heterogeneous individuals' perceptions more accurately than the original HPM model and algorithm. Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities, extending the model to fuzzy environments
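For the standard (homogeneous) p-median objective, a basic VNS loop of the kind specialized in the work above can be sketched as follows; the shaking and swap-based local search shown here are a generic textbook VNS, not the thesis's specialized algorithm, and all names are ours.

```python
import itertools
import random

def p_median_cost(dist, medians):
    """Sum over all points of the distance to their closest chosen median."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def vns_p_median(dist, p, iters=200, seed=0):
    """Basic VNS: shake by replacing k medians, then swap local search."""
    rng = random.Random(seed)
    n = len(dist)
    best = set(rng.sample(range(n), p))
    best_cost = p_median_cost(dist, best)
    for _ in range(iters):
        for k in range(1, p + 1):               # growing neighborhood size
            cand = set(best)
            # Shake: remove k medians at random and add k fresh ones.
            for m in rng.sample(sorted(cand), k):
                cand.remove(m)
            cand |= set(rng.sample([i for i in range(n) if i not in cand], k))
            # Local search: first-improvement single swaps until no gain.
            improved = True
            while improved:
                improved = False
                c_cost = p_median_cost(dist, cand)
                for m, j in itertools.product(sorted(cand), range(n)):
                    if j in cand:
                        continue
                    trial = (cand - {m}) | {j}
                    t_cost = p_median_cost(dist, trial)
                    if t_cost < c_cost:
                        cand, c_cost, improved = trial, t_cost, True
                        break
            if c_cost < best_cost:
                best, best_cost = cand, c_cost
                break                            # success: restart from k = 1
    return sorted(best), best_cost
```

On six points on a line with p = 2, the known optimum places medians at positions 1 and 4 for a total cost of 4, which the loop recovers quickly.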

Relevance:

80.00%

Publisher:

Abstract:

This work deals with an online control strategy based on the Robust Model Predictive Control (RMPC) technique, applied to a real coupled-tanks system. The process consists of two coupled tanks and a pump that feeds liquid into the system. The control objective (a regulator problem) is to keep the tank levels at the chosen operating point even in the presence of disturbances. RMPC is a technique that allows explicit incorporation of plant uncertainty into the problem formulation. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite-horizon objective function, subject to constraints on the control input. The existence of a feedback control law satisfying the input constraints reduces to a convex optimization problem over linear matrix inequalities (LMIs). It is shown in this work that, for plant uncertainty described by a polytope, the feasible receding-horizon state-feedback design is robustly stabilizing. The RMPC software was implemented in Scilab, and its communication with the coupled-tanks system is done through the OLE for Process Control (OPC) industrial protocol
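The 'worst-case' objective over a model polytope can be illustrated for a scalar system under a fixed state feedback; this is only a flavor of the min-max idea, evaluating the vertex models with frozen dynamics rather than the full LMI synthesis the work describes, and all names and numbers are illustrative.

```python
def worst_case_cost(k, vertices, q=1.0, r=1.0, x0=1.0, horizon=200):
    """Worst-case quadratic cost of u = -k*x for x+ = a*x + b*u, where (a, b)
    ranges over the vertices of an uncertainty polytope.

    Simplification: each vertex model is held fixed over the horizon, which
    illustrates (but does not bound) time-varying polytopic uncertainty.
    """
    worst = 0.0
    for a, b in vertices:
        x, cost = x0, 0.0
        for _ in range(horizon):
            u = -k * x
            cost += q * x * x + r * u * u     # stage cost x'Qx + u'Ru
            x = a * x + b * u                 # closed-loop update
        worst = max(worst, cost)
    return worst
```

With vertices a ∈ {0.9, 1.1}, b = 1 and k = 0.5, the closed-loop ratios are 0.4 and 0.6, and the worst case is the geometric sum 1.25/(1 − 0.36).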

Relevance:

80.00%

Publisher:

Abstract:

This work presents a study of constrained Generalized Predictive Controllers and their implementation on physical plants. Three types of constraints are discussed: on the rate of change of the control signal, on the amplitude of the control signal, and on the amplitude of the output signal (the plant response). In predictive control, the control law is obtained by minimizing an objective function; to account for the constraints, this minimization is carried out with a method for constrained optimization. The chosen method was Rosen's algorithm (based on gradient projection). The physical plants in this study are two didactic water-level control systems: a first-order one (a single tank) and a second-order one formed by two tanks connected in cascade. The code is implemented in C++, and communication with the system is done through a data-acquisition board supplied by the system manufacturer
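For amplitude (box) constraints, gradient projection is particularly simple because the Euclidean projection onto a box is elementwise clipping; Rosen's algorithm generalizes this to arbitrary linear constraints by projecting onto the active-constraint nullspace. The sketch below shows only the box case, with an illustrative quadratic cost standing in for the GPC objective.

```python
def projected_gradient_box(grad, x0, lo, hi, step=0.1, iters=500):
    """Minimize a smooth objective under lo <= x <= hi by gradient projection:
    take a gradient step, then clip each coordinate back onto the box."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [min(hi[i], max(lo[i], x[i] - step * g[i])) for i in range(len(x))]
    return x
```

Minimizing (u0 − 2)² + (u1 + 3)² with both controls limited to [−1, 1] drives the iterate to the box corner (1, −1), the constrained optimum.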

Relevance:

80.00%

Publisher:

Abstract:

This work performs an algorithmic study of the optimization of a conformal radiotherapy treatment plan. We first give an overview of cancer, radiotherapy and the physics of the interaction of ionizing radiation with matter. A proposal for optimizing a radiotherapy treatment plan is then developed systematically. We present the multicriteria paradigm and the concepts of Pareto optimality and Pareto dominance, and propose a generic optimization model for radiotherapy treatment. We construct the model input, estimate the dose delivered by the radiation using the dose matrix, and define the model's objective function. Optimization models in radiotherapy treatment are typically NP-hard, which justifies the use of heuristic methods. We propose three distinct metaheuristics: MOGA, MOSA and MOTS. For each procedure we give a brief motivation, the algorithm itself and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method, we analyze the quality of the Pareto sets, some solutions and the respective Pareto curves
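The Pareto dominance relation used to compare the solution sets of the three metaheuristics can be stated compactly (minimization convention; the function names are ours):

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter a list of objective vectors down to its nondominated (Pareto) set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For the bi-objective points (1,2), (2,1), (2,2), (3,3), only the first two survive: neither dominates the other, and each dominates one of the remaining points.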

Relevance:

80.00%

Publisher:

Abstract:

Nonogram is a logic puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among others. The puzzle consists in determining an assignment of colors to pixels in an N × M matrix that satisfies row and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solving Nonograms. Depth-first search was one of the chosen exact approaches, being a typical brute-force algorithm that is easy to implement. Another exact approach was based on the Las Vegas algorithm, in order to investigate whether the randomness it introduces would be an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way of computing the objective function is also proposed. The approaches are applied to 234 instances ranging in size from 5 × 5 to 100 × 100, including both logical and random Nonograms
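The row/column constraint check at the heart of any Nonogram solver reduces to comparing run lengths of filled cells against the clues; a minimal sketch (black-and-white case, names ours):

```python
def runs(line):
    """Run lengths of filled cells (1s) in a row or column: the Nonogram clue."""
    out, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            out.append(count)
            count = 0
    if count:
        out.append(count)
    return out

def satisfies(grid, row_clues, col_clues):
    """True when every row and column of the 0/1 grid matches its clue."""
    cols = list(zip(*grid))
    return (all(runs(r) == c for r, c in zip(grid, row_clues)) and
            all(runs(col) == c for col, c in zip(cols, col_clues)))
```

A DFS exact solver branches on cell assignments and uses this predicate (or partial-line prunings of it) to accept or reject candidates; a heuristic's objective can count violated row/column clues.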

Relevance:

80.00%

Publisher:

Abstract:

History matching in an oil reservoir is of paramount importance for characterizing the reservoir parameters (static and dynamic), which leads to more accurate production forecasts. Throughout this process, one seeks reservoir model parameters able to reproduce the behavior of the real reservoir; the resulting model may then be used to predict production and support oilfield management. During history matching, the reservoir model parameters are modified and, for every new parameter set, a fluid flow simulation is performed to evaluate whether it reproduces the observations of the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir fall below a certain tolerance. Determining the model parameters via history matching requires minimizing an objective function (the difference between observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Given this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. To reduce the ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the inverse problem associated with history matching was regularized by introducing a smoothness constraint on the permeability and porosity, with the geological motivation that these two properties vary smoothly in space. 
In this sense, it is necessary to find the relative weight of this constraint in the objective function that stabilizes the inversion while introducing minimal bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model; this method does not require derivatives when searching for the minimum of the objective function. It is shown that the judicious introduction of the smoothness constraint into the objective function reduces the associated ambiguity and introduces minimal bias into the estimates of permeability and porosity of the semi-synthetic reservoir model
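The structure of such a regularized objective — data misfit plus a weighted smoothness penalty — can be sketched for a 1-D parameter field; this is a generic Tikhonov-style first-difference penalty under our own naming, not the dissertation's exact formulation.

```python
def regularized_misfit(simulated, observed, params, weight):
    """History-match objective: least-squares data misfit plus a smoothness
    penalty (squared first differences of the parameter field) scaled by
    `weight`, the relative weight that must be tuned to stabilize the
    inversion without over-biasing the estimates."""
    misfit = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    rough = sum((params[i + 1] - params[i]) ** 2
                for i in range(len(params) - 1))
    return misfit + weight * rough
```

A derivative-free search such as COMPLEX can minimize this function directly, since only objective evaluations (one flow simulation each) are needed.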

Relevance:

80.00%

Publisher:

Abstract:

The gravity inversion method is a mathematical process that can be used to estimate the basement relief of a sedimentary basin. However, the inverse problem in potential-field methods has neither a unique nor a stable solution, so additional information (beyond the gravity measurements) must be supplied by the interpreter to turn it into a well-posed problem. This dissertation applies a gravity inversion method to estimate the basement relief of the onshore Potiguar Basin. The density contrast between sediments and basement is assumed to be known and constant. The proposed methodology discretizes the sedimentary layer into a grid of juxtaposed rectangular prisms whose thicknesses correspond to the depth to basement, which is the parameter to be estimated. To stabilize the inversion, I introduce constraints in accordance with the known geologic information. The method minimizes an objective function that requires the model not only to be smooth and close to the seismic-derived model, used as a reference, but also to honor well-log constraints; the latter are introduced through logarithmic barrier terms in the objective function. The inversion was applied so as to simulate different phases of the exploratory development of a basin, considering distinct scenarios: the first used only gravity data and a plain reference model; the second was divided into two cases, incorporating either borehole log information or the seismic model into the process. Finally, I incorporated the basement depth from seismic interpretation as a reference model and imposed depth constraints from boreholes using the primal logarithmic barrier method. 
As a result, the estimated basement relief satisfactorily reproduced the basin framework in every scenario, and incorporating the constraints improved the basement depth definition. The joint use of surface gravity data, seismic imaging and borehole logging information makes the process more robust and improves the estimate, providing a result closer to the actual basement relief. In addition, the result obtained in the first scenario already provided a very coherent basement relief when compared to the known basin framework; this is significant information, given the differences in cost and environmental impact between gravimetric surveys on the one hand and seismic surveys and well drilling on the other
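The primal log-barrier device for the well-log depth bounds can be written down compactly; this sketch assumes a 1-D vector of prism depths and our own function names, and omits the outer loop that drives the barrier parameter toward zero.

```python
import math

def barrier_objective(misfit, depths, lower, upper, mu):
    """Inversion objective with primal log-barrier terms keeping each
    estimated basement depth strictly inside its well-log bounds.
    `mu` is reduced over the outer iterations so the barrier vanishes
    at the solution; outside the bounds the objective is infinite."""
    barrier = 0.0
    for d, lo, hi in zip(depths, lower, upper):
        if not (lo < d < hi):
            return float('inf')   # outside the barrier's domain
        barrier -= math.log(d - lo) + math.log(hi - d)
    return misfit + mu * barrier
```

Because the barrier diverges as a depth approaches either bound, any descent method applied to this objective stays strictly feasible, which is exactly the role of the primal logarithmic barrier in the inversion.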

Relevance:

80.00%

Publisher:

Abstract:

Modern industrial activity has been contaminating water with phenolic compounds. These are toxic and carcinogenic substances, and it is essential to reduce their concentration in water to the tolerable level determined by CONAMA in order to protect living organisms. In this context, this work focuses on the treatment and characterization of catalysts derived from bio-char, a by-product of biomass pyrolysis (avelós and wood dust), and on their evaluation in the photocatalytic degradation of phenol. Assays were carried out in a slurry-bed reactor, which enables instantaneous measurements of temperature, pH and dissolved oxygen. The experiments were performed under the following operating conditions: temperature of 50 °C, oxygen flow of 410 mL min⁻¹, reagent solution volume of 3.2 L, 400 W UV lamp, 1 atm pressure and a 2-hour run. The parameters evaluated were the pH (3.0, 6.9 and 10.7), the initial concentration of commercial phenol (250, 500 and 1000 ppm), the catalyst concentration (0, 1, 2 and 3 g L⁻¹) and the nature of the catalyst (CAADCM, activated avelós carbon washed with dichloromethane, and CMADCM, activated wood-dust carbon washed with dichloromethane). The XRF, XRD and BET results confirmed the presence of iron and potassium in satisfactory amounts in the CAADCM catalyst, and in reduced amounts in the CMADCM catalyst, as well as an increase in the surface area of the materials after chemical and physical activation. The phenol degradation curves indicate that pH has a significant effect on phenol conversion, with better results at lower pH. The optimum catalyst concentration was found to be 1 g L⁻¹, and increasing the initial phenol concentration had a negative influence on the reaction. 
A positive effect of the presence of iron and potassium in the catalyst structure was also observed: better conversions were obtained in tests conducted with the CAADCM catalyst than with the CMADCM catalyst under the same conditions. The highest conversion was achieved in the test carried out at acid pH (3.0), with an initial phenol concentration of 250 ppm and CAADCM at 1 g L⁻¹. Liquid samples taken every 15 minutes were analyzed by liquid chromatography, identifying and quantifying hydroquinone, p-benzoquinone, catechol and maleic acid. Finally, a reaction mechanism is proposed in which phenol is transformed in the homogeneous phase while the other species react on the catalyst surface. Applying the Langmuir-Hinshelwood model together with a mass balance yields a system of differential equations, which was solved using the 4th-order Runge-Kutta method coupled to a particle-swarm optimization routine (SWARM) that minimizes a least-squares objective function to obtain the kinetic and adsorption parameters. The kinetic rate constants were of order 10⁻³ for phenol degradation, 10⁻⁴ to 10⁻² for acid formation, 10⁻⁶ to 10⁻⁹ for the mineralization of the quinones (hydroquinone, p-benzoquinone and catechol) and 10⁻³ to 10⁻² for the mineralization of the acids.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To determine the maternal and neonatal clinical outcomes in patients with HELLP syndrome treated with dexamethasone who developed renal injury or renal insufficiency, and to identify predictive values of urea and creatinine for identifying subjects with HELLP syndrome at risk of developing renal insufficiency. Methods: Non-randomized intervention study of dexamethasone use in HELLP syndrome. A total of 62 patients were enrolled at Maternidade Escola Januário Cicco (MEJC). Patients received a total of 30 mg of dexamethasone IV, in three doses of 10 mg every 12 hours. Clinical and laboratory follow-up was performed at 24, 48 and 72 hours, and patients were followed for up to 6 months after delivery. Patients were grouped according to renal function, i.e., normal or some type of renal lesion. Renal lesion was considered when creatinine was equal to or greater than 1.3 mg/dl and diuresis was less than 100 ml in a 4-hour period, and renal insufficiency was defined as the need for dialysis. Results: A total of 1230 patients with preeclampsia were admitted to MEJC, of whom 62 (5%) developed HELLP syndrome. There was no statistical difference between the groups with renal involvement and normal renal function with respect to demographics, type of anesthesia and delivery, and newborn weight. An improvement in AST, ALT, LDH, haptoglobin, antithrombin, fibrinogen and platelets was observed within 72 hours of dexamethasone use. There was a significant increase in diuresis in the interval from 6 hours before delivery to 24 hours after it. Of the 62 patients, 46 (74.2%) had normal renal function and 16 (25.8%) evolved with renal lesion, 5 of them (8.1%) requiring dialysis; these 5 patients recovered renal function. Delay in administering dexamethasone increased the risk of developing renal insufficiency by 4.6%. 
Patients with renal insufficiency had received significantly more blood products than subjects without renal lesion (p=0.03). Diuresis, leukocytes, uric acid, urea and creatinine differed significantly between the groups with normal renal function, renal lesion and renal insufficiency. Admission levels of creatinine of 1.2 mg/dl and uric acid of 51 mg/dl are predictive of subjects who will evolve with renal lesion (p<0.001). Maternal mortality was 3.2%. None of the subjects with renal insufficiency evolved to chronic renal disease. Conclusions: Dexamethasone in patients with HELLP syndrome seems to significantly reduce hepatic microthrombosis and to normalize hemostasis, as seen by the improvement in liver function. Renal injury can be considered, in HELLP syndrome, when creatinine levels are greater than 1.3 mg/dl and diuresis is less than 100 ml/h over a 4-hour interval. Creatinine greater than 1.2 mg/dl and urea greater than 51 mg/dl are predictive of subjects with HELLP syndrome who will develop renal injury. Patients who receive more packed red cells develop renal insufficiency more often. Finally, delay in administering dexamethasone increases the risk of developing renal insufficiency

Relevance:

30.00%

Publisher:

Abstract:

Objective: To determine the prevalence of auditory manifestations in individuals with hypertension and to analyze the association between hearing loss, systemic hypertension and quality of life in hypertensive patients. Method: This was a prospective, observational, case-control study carried out from June 2010 to December 2013 at the University Hospital Onofre Lopes, in Natal, Brazil, involving 120 patients of both sexes with a diagnosis of hypertension and 120 patients without such a diagnosis. Audiological function was assessed by pure-tone and speech audiometry, and quality of life was assessed with the MINICHAL BRASIL questionnaire. Results: The prevalence of hearing loss was high in both groups (82.5% and 75.8% in the hypertension and control groups, respectively; p=0.003). Sensorineural hearing loss was the most common type in the hypertension group (48.5%), while conductive hearing loss predominated in the control group (61.5%). There was no difference in the severity of hearing loss between the groups (p=0.21). The main auditory complaint was hearing loss (51%), followed by ear pain (14%). Quality of life was worse in hypertensive individuals with hearing loss (p=0.0001). Conclusion: Hypertensive individuals showed a higher prevalence of auditory events, including hearing loss, with sensorineural hearing loss predominating. Hearing loss is associated with worse quality of life in hypertensive individuals, even when blood pressure values are within normal limits

Relevance:

30.00%

Publisher:

Abstract:

Inflammation has been pointed out as an important factor in the development of chronic diseases such as diabetes. Hyperglycemia would be responsible for activating the toll-like receptors TLR2 and TLR4 and, consequently, for inducing local and systemic inflammation. Thus, the objective of the present study was to evaluate the pro-inflammatory state in type 1 Diabetes mellitus (T1DM) through the mRNA expression of TLR2, TLR4 and the pro-inflammatory cytokines IL-1β, IL-6 and TNF-α, correlating it with diabetic nephropathy. To this end, 76 T1DM patients and 100 normoglycemic (NG) subjects aged between 6 and 20 years were evaluated. T1DM subjects were evaluated as a whole (DM1) and also grouped by glycemic control (good control, DM1G; poor control, DM1P) and by time since diagnosis (less than 5 years, DM1 <5yrs; 5 years or more, DM1 >5yrs). Metabolic control was evaluated by glucose and glycated hemoglobin concentrations; renal function was assessed by serum urea, creatinine, albumin, total protein and the urinary albumin-to-creatinine ratio (ACR); and hepatic function was evaluated by serum AST and ALT activities. Pro-inflammatory status was assessed by the mRNA expression of TLR2, TLR4 and the inflammatory cytokines IL-1β, IL-6 and TNF-α. Except for the DM1G group (18.4%), T1DM patients (81.6%) showed poor glycemic control, with glycated hemoglobin (11.2%) and serum glucose (225.5 mg/dL) concentrations significantly increased relative to the NG group (glucose: 76.5 mg/dL; glycated hemoglobin: 6.9%). 
Significantly increased values of urea (20%) and ACR (20.8%) and decreased concentrations of albumin (5.7%) and total protein (13.6%) were found in T1DM patients, mainly associated with poor glycemic control (DM1P: urea increased 20% and ACR 49%; albumin decreased 13.6% and total protein 13.6%) and longer disease duration (DM1 >5yrs: urea increased 20% and ACR 20.8%; albumin decreased 14.3% and total protein 13.6%). Regarding the pro-inflammatory status, mRNA expression was significantly increased for TLR2 (37.5%), IL-1β (43%), IL-6 (44.4%) and TNF-α (15.6%) in T1DM patients compared to NG subjects, mainly in the DM1P (poor glycemic control; TLR2: 82%, IL-1β: 36.8% increase) and DM1 >5yrs (longer time since diagnosis; TLR2: 85.4%, IL-1β: 46.5% increase) groups. The results support the existence of an inflammatory state mediated by increased expression of TLR2 and the pro-inflammatory cytokines IL-1β, IL-6 and TNF-α in T1DM

Relevance:

30.00%

Publisher:

Abstract:

This work analyzes the social relationship between television and the family through the resignification individuals give to television messages and the discourses they produce about the family. The objective is, first, to understand whether the principles, values and beliefs constructed and passed on inside the family filter the messages of the mass media; second, whether there still exists a family culture able to forge identity amid so many cultural exchanges; and third, what the function of this identity is in the production of meaning. Sections 1 and 2 present a general approach to the dissemination of the mass media in society and the relevance of the work. Section 3 describes the method: qualitative research with thirteen middle-class families from Natal-RN. The theoretical basis is considered in the fourth section, which traces the evolution of the family, with emphasis on the middle class, and reviews some theories that analyze the phenomenon of the mass media, especially in the second half of the twentieth century. In sections 5 and 6 the research data are presented and analyzed. Finally, the last section concludes that the value of the family as emotional support is reinforced by the discourses and practices that interfere in the signification process, by singular aspects, and by the social repertoire constructed per se and by institutions (including the family); moreover, media messages are assimilated by receivers and understood within the discourses learned throughout their life histories, although these messages are also components in the construction of that repertoire.
