984 results for Power delay profiles


Relevance:

30.00%

Publisher:

Abstract:

The theoretical E-curve for the laminar flow of non-Newtonian fluids in circular tubes may not be accurate for real tubular systems with diffusion, mechanical vibration, wall roughness, pipe fittings, curves, coils, or corrugated walls. Deviations from the idealized laminar flow reactor (LFR) cannot be well represented using the axial dispersion or the tanks-in-series models of residence time distribution (RTD). In this work, four RTD models derived from non-ideal velocity profiles in segregated tube flow are proposed. They were used to represent the RTD of three tubular systems working with Newtonian and pseudoplastic fluids. Other RTD models were considered for comparison. The proposed models provided good fits, and it was possible to determine the active volumes. These models are expected to be useful for the analysis of LFRs and for the evaluation of continuous thermal processing of viscous foods.
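For context (not part of the abstract): the theoretical E-curve referred to has a closed form for a Newtonian fluid in segregated laminar tube flow, E(θ) = 1/(2θ³) for θ ≥ 0.5 and zero otherwise, where θ = t/t̄. A minimal numpy sketch checks its two defining properties, unit area and unit mean:

```python
import numpy as np

# Closed-form E-curve for segregated laminar flow of a Newtonian fluid in a
# circular tube: E(theta) = 1/(2*theta**3) for theta >= 0.5, else 0,
# where theta = t / t_mean.
def e_curve(theta):
    theta = np.asarray(theta, dtype=float)
    return np.where(theta >= 0.5, 0.5 / theta**3, 0.0)

theta = np.linspace(0.5, 500.0, 1_000_000)
dtheta = theta[1] - theta[0]

area = e_curve(theta).sum() * dtheta            # normalization: should be ~1
mean = (theta * e_curve(theta)).sum() * dtheta  # mean dimensionless time: ~1
```

Real systems with diffusion, fittings or corrugated walls deviate from this curve, which is what motivates the four proposed models.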

Relevance:

30.00%

Publisher:

Abstract:

The aim of the present study was to compare performance and physiological responses during arm and leg aerobic power tests of combat duration in male child, cadet and senior judo athletes. Power output and physiological parameters, i.e., peak oxygen uptake (VO2peak), peak ventilation, peak heart rate, lactate, and rate of perceived exertion, of 7 child (under 15 years: age class U15, 12.7 +/- 1.1 yrs), 10 cadet (U17, 14.9 +/- 0.7 yrs) and 8 senior (+20, 29.3 +/- 9.2 yrs) male judo athletes were assessed during incremental tests of combat duration on an arm crank and a cycle ergometer. Children as well as cadets demonstrated higher upper-body relative VO2peak than seniors (37.3 +/- 4.9, 39.2 +/- 5.0 and 31.0 +/- 2.1 ml·kg⁻¹·min⁻¹, respectively); moreover, upper- and lower-body relative VO2peak decreased with increasing age (r = -0.575, p < 0.003 and r = -0.580, p < 0.002, respectively). Children showed lower blood lactate concentrations after cranking as well as after cycling when compared to seniors (7.8 +/- 2.4 vs. 11.4 +/- 2.1 mmol·l⁻¹ and 7.9 +/- 3.0 vs. 12.0 +/- 1.9 mmol·l⁻¹, respectively); furthermore, blood lactate values after cranking increased with age (r = 0.473, p < 0.017). These differences should be considered in planning the training for judo athletes of different age classes.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The purpose of this study was to investigate the rat skin penetration abilities of two commercially available low-level laser therapy (LLLT) devices during 150 sec of irradiation. Background data: Effective LLLT irradiation typically lasts from 20 sec up to a few minutes, but the LLLT time-profiles for skin penetration of light energy have not yet been investigated. Materials and methods: Sixty-two skin flaps overlying rats' gastrocnemius muscles were harvested and immediately irradiated with LLLT devices. Irradiation was performed either with an 810 nm, 200 mW continuous wave laser, or with a 904 nm, 60 mW superpulsed laser, and the amount of penetrating light energy was measured by an optical power meter and registered at seven time points (range, 1-150 sec). Results: With the continuous wave 810 nm laser probe in skin contact, the amount of penetrating light energy was stable at approximately 20% (SEM +/- 0.6) of the initial optical output during 150 sec of irradiation. However, irradiation with the superpulsed 904 nm, 60 mW laser showed a linear increase in penetrating energy from 38% (SEM +/- 1.4) to 58% (SEM +/- 3.5) during 150 sec of exposure. The skin penetration abilities were significantly different (p < 0.01) between the two lasers at all measured time points. Conclusions: LLLT irradiation through rat skin leaves sufficient subdermal light energy to influence pathological processes and tissue repair. The finding that superpulsed 904 nm LLLT light energy penetrates 2-3 times more easily through the rat skin barrier than 810 nm continuous wave LLLT corresponds well with the results of LLLT dose analyses in systematic reviews of LLLT in musculoskeletal disorders. This may explain why a differentiation between these laser types has been needed in the clinical dosage recommendations of the World Association for Laser Therapy.

Relevance:

30.00%

Publisher:

Abstract:

Work carried out by: Packard, T. T., Osma, N., Fernández Urruzola, I., Gómez, M

Relevance:

30.00%

Publisher:

Abstract:

Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and operate with some performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to address the reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC test chip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the MPSoC's interconnection infrastructure. As a physical design technique, we propose the bundle routing framework as an effective way to route the Network-on-Chip's global links. For architecture-level design, two cases are addressed: (i) intra-cluster communication, where we propose a low-latency interconnect with variability robustness, and (ii) inter-cluster communication, where online functional testing together with a reliable NoC configuration is proposed. We also propose dual-Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces compensation at the post-silicon stage.

Relevance:

30.00%

Publisher:

Abstract:

Mass spectrometry-based serum metabolic profiling is a promising tool for analysing complex cancer-associated metabolic alterations, which may broaden our pathophysiological understanding of the disease and may serve as a source of new cancer-associated biomarkers. Highly standardized serum samples of patients suffering from colon cancer (n = 59) and controls (n = 58) were collected at the University Hospital Leipzig. We based our investigations on amino acid screening profiles using electrospray tandem mass spectrometry. Metabolic profiles were evaluated using the Analyst 1.4.2 software. General, comparative and equivalence statistics were performed with R 2.12.2. Eleven out of 26 serum amino acid concentrations were significantly different between colorectal cancer patients and healthy controls. We found a model including CEA, glycine, and tyrosine to discriminate best, superior to CEA alone, with an AUROC of 0.878 (95% CI 0.815-0.941). Our serum metabolic profiling in colon cancer revealed multiple significant disease-associated alterations in the amino acid profile with promising diagnostic power. Further large-scale studies are necessary to elucidate the potential of our model also to discriminate between cancer and potential differential diagnoses. In conclusion, serum glycine and tyrosine in combination with CEA are superior to CEA alone for the discrimination between colorectal cancer patients and controls.
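The discrimination claim can be illustrated with a hedged sketch on synthetic data. The marker shift sizes and directions below are illustrative assumptions, not the study's estimates; the point is only that combining several informative markers into one score typically raises the Mann-Whitney AUC above that of any single marker:

```python
import numpy as np

def auc(score, y):
    """Mann-Whitney AUC: P(a random case scores above a random control)."""
    pos, neg = score[y == 1], score[y == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                 # 0 = control, 1 = cancer (synthetic)
# hypothetical standardized markers; shifts are assumptions for illustration
cea      = rng.normal(size=n) + 1.0 * y
glycine  = rng.normal(size=n) - 1.0 * y   # direction of the shift is assumed
tyrosine = rng.normal(size=n) + 1.0 * y

combined = cea - glycine + tyrosine       # naive unweighted combination
auc_cea, auc_combined = auc(cea, y), auc(combined, y)
```

The study fitted a proper model rather than an unweighted sum, but the qualitative effect is the same: the combined score dominates the single marker.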

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to search for differences in the EEG of first-episode, drug-naive patients with a schizophrenic syndrome who showed different time courses in response to antipsychotic treatment. Thirteen patients who fulfilled the DSM-IV diagnosis of schizophrenia or schizophreniform disorder participated in this study. Before the start of antipsychotic treatment, the EEG was recorded. On the same day psychopathological ratings were assessed using the ADMDP system, and again after 7 and 28 days of treatment. The resting EEG (19 leads) was subjected to spectral analysis involving power values for six frequency bands. The score for the schizophrenic syndrome was used to divide the patients into two groups: those who displayed a clinically meaningful improvement of this syndrome (reduction of more than 30%) after 7 days of treatment (early responders, ER) and those who showed this improvement only after 28 days (late responders, LR). Analysis of variance for repeated measures between ER, LR and their matched controls with the 19 EEG leads yielded highly significant differences for the factor group in the alpha2 and beta2 frequency bands. No difference was found between the slow-wave frequency bands. Compared to controls, the LR group showed significantly higher alpha2 and beta2 power and, in comparison to the ER group, significantly higher alpha2 power. There were no significant differences between the ER and the control group. These findings point to differences in brain physiology between ER and LR. The implications for diagnosis and treatment are discussed.
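As background to the spectral-analysis step, band power can be read off a periodogram; the sampling rate and band edges below are assumptions (the abstract does not list them), and the signal is synthetic:

```python
import numpy as np

fs = 250.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
# synthetic single-channel "EEG": dominant 11 Hz alpha rhythm plus white noise
x = np.sin(2 * np.pi * 11 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)             # one-sided frequency axis
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * t.size)   # periodogram

def band_power(lo, hi):
    m = (freqs >= lo) & (freqs < hi)
    return psd[m].sum() * (freqs[1] - freqs[0])

alpha2 = band_power(10.5, 12.5)   # band edges are assumptions
beta2  = band_power(18.5, 21.0)
```

A group difference such as "higher alpha2 power in LR" amounts to comparing such band-power values between subjects.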

Relevance:

30.00%

Publisher:

Abstract:

Gravity wants to pull an ice sheet to the center of the Earth, but cannot because the Earth's crust is in the way, so ice is pushed out sideways instead. Or is it? The ice sheet "sees" nothing preventing it from spreading out except air, which is much less massive than ice. Therefore, does not ice rush forward to fill this relative vacuum; does not the relative vacuum suck ice into it, because Nature abhors a vacuum? If so, the ice sheet is not only pulled downward by gravity, it is also pulled outward by the relative vacuum. This pulling outward will be most rapid where the ice sheet encounters least resistance. The least resistance exists along the bed of ice streams, where ice-bed coupling is reduced by a basal water layer, especially if the ice stream becomes afloat and the floating part is relatively unconfined around its perimeter and unpinned to the sea floor. Ice streams are therefore fast currents of ice that develop near the margins of an ice sheet where these conditions exist. Because of these conditions, ice streams pull ice out of ice sheets and have pulling power equal to the longitudinal gravitational pulling force multiplied by the ice-stream velocity. These boundary conditions beneath and beyond ice streams can be quantified by a basal buoyancy factor that provides a life-cycle classification of ice streams into inception, growth, mature, declining and terminal stages, during which ice streams disintegrate the ice sheet. Surface profiles of ice streams are diagnostic of the stage in a life cycle and, hence, of the vitality of the ice sheet.
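The abstract's definition of pulling power (longitudinal gravitational pulling force times ice-stream velocity) can be put into numbers with a back-of-envelope sketch. The force expression used here is the standard unopposed depth-integrated longitudinal force at a freely floating ice front, and all numerical values are illustrative, not the paper's:

```python
# Illustrative values (not from the paper): the unopposed depth-integrated
# longitudinal force per unit width at a freely floating ice front is
# F = 0.5 * rho_i * g * H**2 * (1 - rho_i / rho_w); the abstract defines
# pulling power as that force times the ice-stream velocity.
rho_i, rho_w, g = 917.0, 1028.0, 9.81   # ice / sea-water density (kg/m3), gravity
H = 1000.0                              # ice thickness at the front (m), assumed
u = 500.0 / (365.25 * 24 * 3600)        # 500 m/yr ice-stream velocity, in m/s

F = 0.5 * rho_i * g * H**2 * (1 - rho_i / rho_w)   # N per metre of width
P = F * u                                          # pulling power, W per metre
```

The basal buoyancy factor in the abstract modulates how much of this force is actually unopposed at each life-cycle stage.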

Relevance:

30.00%

Publisher:

Abstract:

During the last decade mobile communications have increasingly become part of people's daily routine. Such usage raises new challenges regarding device battery lifetime management when using the most popular wireless access technologies, such as IEEE 802.11. This paper investigates the energy/delay trade-off of an end-user-driven power saving approach, compared with the standard IEEE 802.11 power saving algorithms. The assessment was conducted in a real testbed using an Android mobile phone and high-precision energy measurement hardware. The results show clear energy benefits of employing user-driven power saving techniques when compared with the standard approaches.

Relevance:

30.00%

Publisher:

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller.
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free-ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
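The partition-function idea can be made concrete with a toy 3-player example (worths are hypothetical, and only three of the five possible coalition structures are listed): a coalition's worth depends on the entire coalition structure, which is how the model encodes externalities such as free-riding.

```python
# Toy 3-player partition function (hypothetical numbers). Each coalition's
# worth depends on the whole coalition structure, not just its own members.
pf = {
    "all singletons": {frozenset("a"): 1, frozenset("b"): 1, frozenset("c"): 1},
    "ab vs c":        {frozenset("ab"): 3, frozenset("c"): 2},
    "grand":          {frozenset("abc"): 6},
}

totals = {name: sum(worths.values()) for name, worths in pf.items()}
# The grand coalition is efficient here (6 > 5 > 3), yet c's worth rises from
# 1 to 2 when a and b merge: an incentive to free-ride on others' cooperation.
free_ride_gain = pf["ab vs c"][frozenset("c")] - pf["all singletons"][frozenset("c")]
```

It is exactly this kind of positive externality on outsiders that the chapter identifies as an obstacle to efficient agreements.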

Relevance:

30.00%

Publisher:

Abstract:

Field investigations of the Laptev Sea shoreface morphology were carried out (1) off erosional shores composed of unconsolidated sediments, (2) off the modern delta shores of the Lena River, and (3) off rocky shores. It was found that profiles off erosional shores had a concave shape. This shape is not well described by commonly applied power functions, a feature that disagrees with the generally accepted concept of the equilibrium shape of shoreface profiles. The position of the lower shoreface boundary is determined by the elevation of the coastal lowland inundated during the last transgression (at -5 to -10 m) and may easily be recognized by a sharp, order-of-magnitude decrease in the mean inclination of the sea floor. The mean shoreface inclination depends on sediment grain size and ranges from 0.0022 to 0.033. The concave shape of the shoreface did not change substantially during the last 20-30 years, which indicates that shoreline retreat did not slow down and hence suggests continued intensive coastal erosion in the 21st century. The underwater part of the Lena River delta extends up to 35 km offshore. Its upper part is formed by a shallow and up to 18-km-wide bench, which reaches depths of 2-3 m along the outer edge. The evolution of the delta was irregular. Whereas some parts of the delta are advancing rapidly (58 m/year), other parts are eroding. Comparison of measured profiles with older bathymetric data provided an opportunity to evaluate the changes of the underwater delta over the past decades. Bathymetric surveys of the seabed around the delta can thus contribute towards a quantification of the sediment budget of the river-sea system. In addition, some sections of the Laptev Sea coast are composed of bedrock that has a comparatively low resistance to wave erosion. These sections may supply a considerable amount of sediment, especially if the cliffs are high.
This source must therefore also be taken into account when assessing the contribution of shore erosion to the Laptev Sea sediment budget.
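For context on the "commonly applied power functions" the measured profiles deviate from: an equilibrium shoreface profile is usually written h = A·x^m (Dean-type, classically m = 2/3) and fitted in log-log space. This sketch only demonstrates the fitting procedure on synthetic, noise-free data, not the paper's measurements:

```python
import numpy as np

# Dean-type equilibrium shoreface profile h = A * x**m (classically m = 2/3).
A_true, m_true = 0.1, 2.0 / 3.0
x = np.linspace(1.0, 2000.0, 200)   # distance offshore (m)
h = A_true * x**m_true              # depth below sea level (m)

# least-squares fit of log h = log A + m * log x
m_fit, logA_fit = np.polyfit(np.log(x), np.log(h), 1)
```

Applied to the Laptev Sea profiles, such a fit leaves systematic residuals, which is the paper's evidence against the equilibrium-profile concept there.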

Relevance:

30.00%

Publisher:

Abstract:

Many context-aware applications rely on knowledge of the position of the user and the surrounding objects to provide advanced, personalized and real-time services. In wide-area deployments, a routing protocol is needed to collect location information from distant nodes. In this paper, we propose a new source-initiated (on-demand) routing protocol for location-aware applications in IEEE 802.15.4 wireless sensor networks. This protocol uses a low-power MAC layer to maximize the lifetime of the network while keeping the communication delay low. Its performance is assessed through experimental tests that show a good trade-off between power consumption and time delay in the localization of a mobile device.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a low-power, high-speed, 4-data-path, 128-point mixed-radix (radix-2 & radix-2²) FFT processor for MB-OFDM Ultra-WideBand (UWB) systems. The processor employs the single-path delay feedback (SDF) pipelined structure for the proposed algorithm; it uses substructure-sharing multiplication units and a shift-add structure instead of traditional complex multipliers. Furthermore, the word lengths are properly chosen, so the hardware cost and power consumption of the proposed FFT processor are efficiently reduced. The proposed FFT processor is verified and synthesized using 0.13 µm CMOS technology with a supply voltage of 1.32 V. The implementation results indicate that the proposed 128-point mixed-radix FFT architecture supports a throughput rate of 1 Gsample/s with lower power consumption than existing 128-point FFT architectures.
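The 128-point radix-2 decomposition underlying such processors can be illustrated with a minimal software FFT; this is the textbook radix-2 decimation-in-time algorithm, not the paper's mixed-radix SDF hardware datapath:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])              # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])               # DFT of odd-indexed samples
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd   # twiddle factors
    return np.concatenate([even + tw, even - tw])

x = np.random.default_rng(2).standard_normal(128)
X = fft_radix2(x)                           # matches np.fft.fft(x)
```

In hardware, the same butterfly structure is unrolled into a pipeline, and the radix-2² reformulation turns many twiddle multiplications into trivial ±1/±j rotations, which is what enables the shift-add multiplier sharing described in the abstract.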

Relevance:

30.00%

Publisher:

Abstract:

A numerical method providing the optimal laser intensity profiles for a direct-drive inertial confinement fusion scheme has been developed. The method provides an alternative to phase-space optimization studies, which can prove computationally expensive. It applies to a generic irradiation configuration characterized by an arbitrary number NB of laser beams, provided that they irradiate the whole target surface, and thus goes beyond previous analyses limited to symmetric configurations. The calculated laser intensity profiles optimize the illumination of a spherical target. This paper focuses on the description of the method, which proceeds in two steps: first, the target irradiation is calculated for initial trial laser intensities; in a second step, the optimal laser intensities are obtained by correcting the trial intensities using the calculated illumination. A limited number of example applications to direct drive on the Laser MegaJoule (LMJ) are described.
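The two-step idea (compute the illumination for trial intensities, then correct the intensities using that illumination) can be sketched in a toy 1D periodic analogue. The geometry, the Gaussian footprints and the exact linear-solve correction below are illustrative simplifications, not the paper's method:

```python
import numpy as np

# Toy 1D periodic analogue: beams with Gaussian footprints aimed at unevenly
# spaced points on a ring. Step 1 computes the illumination produced by
# uniform trial intensities; step 2 corrects the intensities so that the
# illumination at every aim point becomes uniform.
aims = np.array([0.0, 0.9, 2.1, 2.9, 4.2, 5.3])   # beam aim angles (rad)

def footprint(theta, aim, width=0.6):
    d = np.angle(np.exp(1j * (theta - aim)))      # shortest periodic distance
    return np.exp(-0.5 * (d / width) ** 2)

# G[j, k] = illumination at aim point j from beam k at unit intensity
G = np.array([[footprint(tj, ak) for ak in aims] for tj in aims])

I_trial = np.ones(len(aims))        # step 1: trial intensities
illum_trial = G @ I_trial           # calculated illumination (non-uniform)

# step 2: corrected intensities giving uniform illumination at the aim points
I_opt = np.linalg.solve(G, np.full(len(aims), illum_trial.mean()))
```

The real method evaluates the illumination over the whole spherical target surface rather than only at aim points, but the correction logic is the same: the non-uniformity computed in step 1 dictates the intensity adjustment in step 2.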

Relevance:

30.00%

Publisher:

Abstract:

This thesis is developed within the framework of satellite communications, in the innovative field of small satellites, also called nanosatellites or CubeSats after their cubic shape. These nanosatellites are characterized by their low cost, since they use commercial off-the-shelf (COTS) components, and by their small size, such as the 1U CubeSat (10 cm * 10 cm * 10 cm) with a mass of approximately 1 kg. This thesis builds on an initiative proposed by its author to put into orbit the first Peruvian satellite, called Chasqui I, which has since been deployed into orbit from the International Space Station. The experience of that research work led me to propose a constellation of small satellites, called Waposat, to provide a global water-quality sensor monitoring service, the scenario used in this thesis. In this context, and given the limited capabilities of small satellites in both power and data rate, I propose to investigate a new communications architecture that optimally solves the problems posed by nanosatellites in LEO orbit, whose communications are disruptive by nature, with emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide service to a network of terrestrial sensors using a solution based on DTN (Delay/Disruption Tolerant Networking) for space communications. In addition, I propose a new multiple-access protocol that extends unslotted ALOHA and takes into account the priority of gateway traffic (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and dynamically adapts the effective channel throughput in an optimal way.
We assume a finite sensor-population model and a saturated traffic condition in which every sensor always has frames to transmit. Network performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, above all when reactive fragmentation is used. Finally, this thesis investigates the optimal transfer of DTN messages (bundles) using proactive fragmentation strategies to serve a network of terrestrial sensors over a satellite communications link that uses the multiple-access mechanism with downlink-traffic priority (ALOHAGP). The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors reported by the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle that a DTN bundle agent can accept in satellite communications, for both storage and delivery, so bundles that are too large are dropped and those that are too small are inefficient. I have characterized this kind of scenario by obtaining a probability distribution of frame arrivals at the nanosatellite and a probability distribution of the nanosatellite's visibility time, which provide an optimal proactive fragmentation of DTN bundles. I have found that the effective throughput (goodput) of proactive fragmentation reaches a value slightly below that of reactive fragmentation.
This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as allowing the DTN security model to be deployed, and its simplicity of implementation on equipment with tight CPU and memory constraints. The implementation of these contributions was initially envisaged as part of the payload of the QBito nanosatellite, which belongs to the constellation of 50 nanosatellites being developed within the QB50 project. ABSTRACT This thesis is developed within the framework of satellite communications in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called because of their cubic form. These nanosatellites are characterized by their low cost, because they use commercial off-the-shelf (COTS) components, and by their small size and mass, such as 1U CubeSats (10 cm * 10 cm * 10 cm) with approximately 1 kg mass. This thesis is based on a proposal made by its author to put into orbit the first Peruvian satellite, called Chasqui I, which was successfully launched into orbit from the International Space Station in 2014. The experience of this research work led me to propose a constellation of small satellites named Waposat to provide water-quality sensor monitoring worldwide, a scenario that is used in this thesis. In this scenario, and given the limited features of nanosatellites in both power and data rate, I propose to investigate a new communications architecture that optimally solves the problems of nanosatellites in LEO orbit, whose communications are disruptive by nature, with emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide services to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution.
In addition, I propose a new multiple-access protocol based on extended unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the effective channel throughput in an optimal way. We assume a finite sensor-population model and a saturated traffic condition in which every sensor always has frames to transmit. Performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, mainly when reactive fragmentation is used. Finally, this thesis investigates optimal DTN message (bundle) transfer using proactive fragmentation strategies to serve a ground sensor network over a nanosatellite communications link that uses the multiple-access mechanism with downlink-traffic priority (ALOHAGP). The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors reported by the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle that a bundle agent can accept in satellite communications, for both storage and delivery, so bundles that are too large may be dropped while those that are too small are inefficient. We have characterized this kind of scenario by obtaining a probability distribution of frame arrivals at the nanosatellite and a distribution of its visibility time, which provide an optimal proactive fragmentation of DTN bundles. We have found that the effective throughput (goodput) of proactive fragmentation reaches a value slightly below that of the reactive fragmentation approach.
This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as support for the DTN security model and a simple protocol implementation on hardware with tight CPU and memory constraints. The implementation of these contributions was initially envisaged as part of the payload of the nanosatellite QBito, which is part of the constellation of 50 nanosatellites being developed under the QB50 project.
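For background on the baseline the thesis extends: classical unslotted (pure) ALOHA has normalized throughput S = G·e^(−2G), where G is the offered load in frames per frame time, peaking at S = 1/(2e) ≈ 0.184 for G = 0.5. A quick numpy check of the curve (ALOHAGP's adaptive contention and gateway priority are not modeled here):

```python
import numpy as np

# Pure (unslotted) ALOHA: normalized throughput S = G * exp(-2G) for offered
# load G; the maximum is 1/(2e) ~= 0.184 at G = 0.5.
G = np.linspace(0.0, 2.0, 2001)
S = G * np.exp(-2.0 * G)

G_opt = G[np.argmax(S)]   # offered load at peak throughput
S_max = S.max()           # peak normalized throughput
```

Adapting the protocol parameters to the number of active sensors, as the thesis does, is precisely an attempt to keep the system operating near this peak.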