
Friday, December 21, 2012 | 0 comments

The importance of interaction analysis in CSCL

Collaborative learning environments are characterized by a high degree of user interaction with the system, which generates a large number of action events. Managing these action events is a key issue in CSCL applications. On the one hand, analyzing data collected from real-life, online collaborative learning situations helps us better understand important aspects of group functioning and of the collaborative learning process. This understanding can guide both the design of a more functional workspace and its software components, and the development of improved facilities such as awareness, feedback, workspace monitoring, and evaluation and supervision of the group's work by a coordinator, tutor, etc. Indeed, proper filtering and management of events allows a set of parameters to be established that can be used to analyze the group's activity space (e.g., tutor-to-group or member-to-member communication flow, asynchronism in the group space, etc.). These parameters make it possible to predict the efficiency of the group's activities, as well as the group and individual attitudes of its members in the shared workspace, so that performance can be improved.
Furthermore, the application design must, for this purpose, organize and manage both the resources offered by the system and the users accessing those resources. All of this user-user and user-resource interaction generates events, or "logs", that are stored in log files and represent the information base for the statistical processing aimed at obtaining knowledge about the system. This knowledge facilitates the collaborative learning process by keeping users abreast of what is happening in the system (for example, the contributions of others, documents created, etc.) and by monitoring user behavior in order to provide support (e.g., helping students who are unable to perform a task on their own). User-user and user-resource interaction is therefore critical in any collaborative learning environment, since it enables groups of students to communicate with each other and achieve common goals (e.g., a collaborative classroom activity).
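As an illustration of how such parameters can be derived from the event logs, the sketch below assumes a hypothetical log format (timestamp, actor, action, target, separated by "|") and computes two of the parameters mentioned above: member-to-member communication flow and a rough asynchronism measure. The format, the sample events and the helper names are invented for the example, not taken from any particular CSCL system.

```python
from collections import Counter
from datetime import datetime

# Hypothetical log format: "timestamp|actor|action|target", e.g.
# "2012-12-21T10:15:00|alice|message|bob"
SAMPLE_LOG = [
    "2012-12-21T10:15:00|alice|message|bob",
    "2012-12-21T10:16:30|bob|message|alice",
    "2012-12-21T10:40:00|tutor|message|group",
]

def communication_flow(log_lines):
    """Count how many message events flow from each actor to each target."""
    flow = Counter()
    for line in log_lines:
        timestamp, actor, action, target = line.split("|")
        if action == "message":
            flow[(actor, target)] += 1
    return flow

def mean_response_gap(log_lines):
    """Rough asynchronism measure: average gap (seconds) between consecutive events."""
    times = [datetime.fromisoformat(line.split("|")[0]) for line in log_lines]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

if __name__ == "__main__":
    print(communication_flow(SAMPLE_LOG))   # e.g. {('alice', 'bob'): 1, ...}
    print(mean_response_gap(SAMPLE_LOG))    # average seconds between events
```

In a real environment the same kind of aggregation would run over the system's actual log files and feed the awareness and feedback facilities described above.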
Although user interaction is the most important aspect to be managed in these applications, it is usually also important to be able to monitor and control overall system performance. This allows the administrator to continuously watch critical parts of the system and act as necessary. Moreover, it adds an implicit layer of security on top of what already exists (for example, monitoring user habits to detect fraudulent use of the system by unauthorized users).
To effectively communicate the knowledge gained from the group's activity in the form of awareness and feedback, CSCL applications should provide full support for three aspects that are essential in any collaborative application, namely coordination, communication and collaboration, in order to create virtual environments where students, teachers, tutors, etc. are able to cooperate with each other to achieve a common learning goal. Coordination involves organizing the group in order to achieve the objectives set and monitoring user activity, which is made possible by maintaining awareness among the participants.
Communication basically concerns the exchange of messages between users, within and between groups, in both synchronous and asynchronous modes. Finally, collaboration allows the members of the group to share all kinds of resources, again in both synchronous and asynchronous modes. Coordination, communication and collaboration all generate many events, which are communicated to users after they have been processed and analyzed, in order to provide users with as much immediate and constant awareness, and as much feedback, as possible.

Trends in mobile technology

Thursday, November 22, 2012 | 0 comments


Over the past year, 2012, the evolution of technology has taken an unexpected leap. The technology boom of the new generation revolves around the "cell phone and Internet" pairing (smartphones, tablets, the iPhone, etc.), which, rather than being separate realities, complement each other. As these technologies developed, a point came at which they converged: the network opened up as a global communications medium and exceeded the expectations of its creators. The Internet was no longer exclusive to the military and government and, combined with telephony services, became a medium of social interaction that is now present in all areas of daily life.

Today these technologies are combined in a single device, the cell phone, which is no longer limited to letting two people communicate with each other, but has evolved to include Internet access in almost all its forms (data, mp3, teleconferencing, transmission of photos and videos, etc.). This brings countless advantages: it accelerates the pace at which information is obtained, facilitates communication, and reduces transmission and response times; in other words, it turns everyday life into a technological event, all of this tied to the economic growth of societies, beyond all the changes in the natural order of things that technology generates.

Having seen the many broad and constant changes that mobile telephony and the Internet have brought to the global community, my interest arose in learning more about the issues that shape this revolution in our own environment.
To address the above, this paper proposes the development of a software application for a mobile computing platform that provides access, from mobile devices such as cell phones, to information stored in a database hosted on a Web server.
The application supports the registration and monitoring of information belonging to a pharmaceutical business, i.e. the relevant customer information, product purchases and prescription medicines. This gives customers the ability to manage their purchases themselves, anytime and anywhere, without having to physically visit the pharmacy branches, with only the help of a modern cell phone.
In turn, the development of a web application accessible on the intranet is proposed, providing additional features such as user registration, stock control, and counter sales of drugs and products.
It also proposes the development of a Web site accessible from the Internet, acting as the pharmacy's site, which includes e-commerce functions such as customer registration and online sale of products and/or drugs, allowing a customer to make purchases virtually.
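As a rough sketch of the kind of server-side component such a system needs, the snippet below exposes the product catalog stored in the Web server's database over HTTP, so a mobile client can query it from anywhere. It is only an illustrative assumption: the pharmacy.db file, the products table and the /products endpoint are hypothetical names, not part of the actual design described above.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "pharmacy.db"  # hypothetical database file, assumed to already exist

def list_products():
    """Return the product catalog as a list of dicts (hypothetical schema)."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT id, name, price, stock FROM products").fetchall()
    conn.close()
    return [dict(r) for r in rows]

class PharmacyAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/products":
            body = json.dumps(list_products()).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)  # the phone parses this JSON list
        else:
            self.send_error(404)

if __name__ == "__main__":
    # A phone on the network could then fetch http://<server>:8080/products
    HTTPServer(("0.0.0.0", 8080), PharmacyAPI).serve_forever()
```

A cell phone application would then simply issue an HTTP GET to /products and render the returned JSON, which keeps the client thin enough to run on the modest handsets of the time.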

Saturday, November 17, 2012 | 0 comments

Differences OLTP vs Data Warehouse

Traditional transaction systems and data warehousing applications are polar opposites in terms of their design requirements and operating characteristics.
OLTP applications are organized to execute the transactions for which they were built, for example moving money between accounts, posting a charge or credit, or returning inventory. A data warehouse, by contrast, is organized around business concepts such as customers, invoices, products, etc.
Another difference lies in the number of users. Normally, a data warehouse has fewer users than an OLTP system. It is common to find transactional systems accessed by hundreds of users simultaneously, while a data warehouse is accessed by only tens. OLTP systems perform hundreds of transactions per second, while a single data warehouse query can take minutes. Another factor is that transactional systems are frequently smaller in size than data warehouses, because a data warehouse can consolidate information from several OLTP systems.
There are also design differences: while an OLTP schema is highly normalized, a data warehouse schema tends to be denormalized. An OLTP system typically consists of a large number of tables, each with few columns, while a data warehouse has a smaller number of tables, but each of them tends to have more columns.
OLTP data is updated continuously by the operational systems every day, while data warehouses are updated periodically in batches.
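As a small, hedged sketch of that periodic batch update, the snippet below copies one day's orders from normalized OLTP tables into a single denormalized warehouse table using SQLite. The table and column names (orders, customers, order_items, products, dw_sales) are hypothetical and serve only to illustrate the normalized-to-denormalized load.

```python
import sqlite3

def nightly_batch_load(oltp_path, dw_path, business_date):
    """Copy one day's orders from a normalized OLTP schema into a denormalized warehouse table."""
    oltp = sqlite3.connect(oltp_path)
    dw = sqlite3.connect(dw_path)
    dw.execute("""CREATE TABLE IF NOT EXISTS dw_sales (
                      sale_date TEXT, customer_name TEXT,
                      product_name TEXT, quantity INTEGER, amount REAL)""")
    # Join the normalized OLTP tables once, so later warehouse queries need no joins.
    rows = oltp.execute("""
        SELECT o.order_date, c.name, p.name, i.quantity, i.quantity * p.price
        FROM orders o
        JOIN customers c   ON c.id = o.customer_id
        JOIN order_items i ON i.order_id = o.id
        JOIN products p    ON p.id = i.product_id
        WHERE o.order_date = ?""", (business_date,)).fetchall()
    dw.executemany("INSERT INTO dw_sales VALUES (?, ?, ?, ?, ?)", rows)
    dw.commit()
    oltp.close()
    dw.close()

# Typically scheduled overnight, e.g. nightly_batch_load("oltp.db", "dw.db", "2012-11-17")
```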
OLTP structures are very stable and rarely change, while data warehouse structures change constantly as they evolve. This is because the types of queries they are subject to are varied, and it is impossible to foresee all of them in advance.
Among the benefits of a data warehouse are the following:
Improved Information Delivery: complete, correct, consistent, timely and accessible information; the information people need, at the time they need it and in the format they need it.
Improved Decision-Making Process: with better supporting information, decisions are reached faster; business people also gain greater confidence in their own decisions and those of others, and achieve a better understanding of the impact of those decisions.
Positive Impact on Business Processes: when people are given access to better-quality information, the company can:
   · Eliminate delays in business processes caused by incorrect, inconsistent and/or nonexistent data.
   · Integrate and optimize business processes through shared and integrated information sources.
   · Eliminate the production and processing of data that is not used or required, often the result of poorly designed applications or applications no longer in use.

CRM News Software Data: Choosing the Public or Private Cloud

Monday, November 12, 2012 | 0 comments


The transition to the cloud still leaves some company leaders wondering about the right place to host their customer relationship management data. "The cloud," as experts have come to call the collection of data centers storing corporate information, still has some faults. Sharing data center resources carries the perceived risks of security issues and the real threat of downtime. Companies willing to take on extra risk generally enjoy heightened response times and greater productivity. But media coverage of high-profile outages at major data centers has dampened some CIOs' enthusiasm for distributed computing.

Therefore, some CRM software vendors have created service packages that include, in their terms, a "private cloud." Like a traditional dedicated server rack, resources are used only by a single client company. However, with help from blade servers, RAID storage arrays, and other tools, vendors promise the ability to ramp up quickly during times of peak demand--without relying on a shared server farm.

According to industry analysts, recessions often force companies to make hard decisions about their vendors and platforms. While some CRM software companies have tentatively extended invitations to the "private cloud," other vendors rely on the scalability and redundancy of the public cloud for their clients' success. Either way, many current CRM software subscribers will spend time over the coming months requesting quotes and exploring the kind of data infrastructure necessary to take their businesses to the next level.

Real world, real successes

Sunday, November 4, 2012 | 0 comments


IBM is the industry-leading supplier of grid solutions, services and expertise to the scientific and technical communities, as well as to the financial services sector. Leveraging its considerable experience in implementing commercial grids worldwide, IBM has created targeted grid offerings customized to meet the unique grid computing needs of the financial services industry. IBM Grid Computing is currently engaged with more than 20 major financial institutions in North America, Europe and Japan, and more than 100 organizations worldwide.
Wachovia worked with IBM and DataSynapse to enhance the processing speed of trading analytics in the financial services company's fixed income derivatives group. Before implementing a grid solution, profit and loss reports and risk reports took as long as 15 hours to run; now, with the grid solution in place, Wachovia can turn around mission-critical reports in minutes on a real-time, intraday basis. Moreover, trading volume increased by 400 percent, and the number of simulations by 2,500 percent. As a result, the group can book larger, more exotic and more lucrative trades with more accurate risk taking.

Technologies powering the grid 2012

Monday, October 29, 2012 | 0 comments


The workhorses of the IBM grid infrastructure are the grid engines: desktop PCs, workstations or servers that run the UNIX, Microsoft Windows or Linux operating systems. These compute resources execute various jobs submitted to the grid, and have access to a shared set of storage devices. The IBM Grid Offering for Risk Management and Compliance relies on grid middleware from DataSynapse to create distributed sets of virtualized resources.
The production-proven, award-winning DataSynapse GridServer application infrastructure platform extends applications in real time to operate in a distributed computing environment across a virtual pool of underutilized compute resources. GridServer application interface modules allow risk management and compliance applications and next-generation development of risk management and compliance application platforms to be grid-enabled.
IBM DB2 Information Integrator enables companies to have integrated, real-time access to structured and unstructured information across and beyond the enterprise. Critical to the grid infrastructure, the software accelerates risk and compliance analytics applications that process massive amounts of data for making better informed decisions. DB2 Information Integrator provides transparent access to any data source, regardless of its location, type or platform.

UEFI evolution over time

Thursday, October 25, 2012 | 0 comments

Let us briefly review the path it has followed.

In principle, by 2010 we can recognize two steps in the boot process.

1. The first step is called PI (Platform Initialization), which covers starting up the underlying platform: the first thing to run was the security check, and if it passed, the processor, then the chipset, then the motherboard were started in sequence. Once all of this was working, the operating system was loaded. Two paths were proposed here: loading a transient operating system, to host applications such as web browsers and email clients, or loading the operating system of choice, after first choosing which device the boot would run from (disk, memory card, etc.).

2. After startup, with the operating system up and running, you may want to interact with the firmware directly. Wherever possible, this should be done exclusively through the UEFI interfaces. On this point, remember that Windows historically chose not to talk to the BIOS directly, instead using high-speed drivers rather than accessing the underlying firmware in all modern versions of Windows.

To start addressing the underlying issue, we can describe, at first glance, the four-stroke cycle of a PC's main use (sketched in code below):
     power on
     platform initialization
     operating system startup
     power off
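The little sketch below strings that cycle together, with the PI step from point 1 broken out into its security check and processor-chipset-motherboard sequence. The Stage enum and the platform_initialization and boot helpers are purely illustrative names, not part of any UEFI specification.

```python
from enum import Enum, auto

class Stage(Enum):
    PROCESSOR = auto()
    CHIPSET = auto()
    MOTHERBOARD = auto()

def platform_initialization(start_stage, security_ok):
    """PI step as described above: security check first, then each stage in order."""
    if not security_ok:
        raise RuntimeError("security check failed; platform not started")
    for stage in Stage:
        start_stage(stage)

def boot(start_stage, security_ok, transient=False):
    """After PI, either a transient OS (browser/email only) or the OS of choice is loaded."""
    platform_initialization(start_stage, security_ok)
    return "transient OS" if transient else "operating system of choice"

if __name__ == "__main__":
    print(boot(lambda s: print(f"starting {s.name.lower()}"), security_ok=True))
```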

Boot Process

| 0 comments

The specification defines how the hardware invokes the startup software, or how an operating system begins loading. The boot process, however, no longer depends on an operating system loader as it did before: UEFI can load multiple operating systems without needing a loader such as NTLDR (Windows) or LILO (Linux). UEFI selects the partition containing the operating system and loads it from there. For this to happen, both the hardware and the software must be UEFI-compatible.

During boot, UEFI offers menus that are much friendlier than the old BIOS menus and allows certain hardware configuration tasks to be carried out without loading the main operating system. This has nothing to do with the current ability of certain machines to load a browser without booting the main OS, since those machines achieve it by loading a mini operating system from the BIOS. By contrast, with the new specification no mini operating system is loaded; instead, you select which drivers to load in order to run a particular application. With this selective loading, web browsers or email clients can be started much faster than if a full OS had to be launched. These are called "OS-absent applications". Conversely, once the operating system has started and applications are used in the usual way, they are called "OS-present applications".
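On a Linux machine that has already booted through UEFI, you can peek at the boot entries the firmware manages by listing the variables it exposes. The sketch below assumes efivarfs is mounted at the usual /sys/firmware/efi/efivars path and simply prints the Boot-related variable names (BootOrder, Boot0000, and so on); it is an aside for illustration, not part of the specification.

```python
from pathlib import Path

EFIVARS = Path("/sys/firmware/efi/efivars")  # standard efivarfs mount point on Linux

def list_boot_variables():
    """List UEFI variables whose names start with 'Boot' (BootOrder, Boot0000, ...)."""
    if not EFIVARS.is_dir():
        return []  # not booted via UEFI, or efivarfs not mounted
    # Variable files are named <Name>-<VendorGUID>; keep just the name part.
    return sorted({p.name.split("-", 1)[0] for p in EFIVARS.iterdir()
                   if p.name.startswith("Boot")})

if __name__ == "__main__":
    for name in list_boot_variables():
        print(name)
```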

To understand this better, consider the following image, which shows the building blocks of a generic or universal PC. At the bottom of the figure, the lowest layer is the hardware. Some portion of the hardware can be accessed through fixed code called firmware, and another portion is accessible directly from the operating system. Here ends what we would call the "platform", delimited by the dotted line. Above this layer sits the operating system, and above it, the applications we use every day:

What is UEFI?

| 0 comments

As I write this, the commercial launch of Windows 8 is taking place. To accompany this milestone, I decided to write this article about the startup and shutdown acceleration technologies found in the new operating system. This article is a survey whose sources are the Intel(r) and Microsoft(r) websites and other technology blogs; the images have mostly been generated ad hoc for this page, except for Figures 2 and 5.
This article attempts to explain how Windows 8 takes advantage of UEFI to achieve very short startup times. UEFI stands for Unified Extensible Firmware Interface. It is a specification, originally developed by Intel, intended to replace the old BIOS (Basic Input-Output System). Both the new UEFI and the old BIOS aim to let the operating system "understand" the basic hardware, which is usually handled by firmware. The new approach starts components on the platform side (hardware and firmware), on top of which the OS is then mounted. Windows 8 supports UEFI boot to achieve very short startup times.



Wednesday, October 24, 2012 | 0 comments

Improving productivity and efficiency through a multistage implementation


Financial services firms can take an existing, inefficient infrastructure for risk management and compliance and gradually grow it into an integrated, highly efficient grid system. As shown in Figure 1, an existing infrastructure may comprise stovepipes of legacy applications: disparate islands of applications, tools, and compute and storage resources with little to no communication among them. A firm can start by enabling one application (a simulation application for credit risk modeling, for example) to run faster by using grid middleware to virtualize the compute and storage resources supporting that application.
The firm can extend the same solution to another application, for example, a simulation application used to model market risk. Compute and storage resources for both simulation applications are virtualized by extending the layer of grid middleware; thus both applications can share processing power, networked storage and centralized scheduling. Resiliency is achieved at the application level through failover built into the DataSynapse GridServer. If failure occurs or the need to prioritize particular analyses arises, one application can pull unutilized resources that are supporting the other application. This process also facilitates communication and collaboration across functional areas and applications to provide a better view of enterprise risk exposure.
Alternatively, a firm can modernize by grid-enabling a particular decision engine. A decision engine, such as one developed with Fair Isaac's tools, can deliver the agility of business rules and the power of predictive analytic models while leveraging the power of the grid to execute decisions in record time. This approach guarantees that only the compute-intensive components are grid-enabled while simultaneously migrating these components to technology specifically designed for decision components.
Over time, all applications can become completely grid-enabled or can share a common set of grid-enabled decision engines. All compute and data resources become one large resource pool for all the applications, increasing the average utilization rate of compute resources from 2 to 50 percent in a heterogeneous architecture to over 90 percent in a grid architecture.
Based on priorities and rules, DataSynapse GridServer automatically matches application requests with available resources in the distributed infrastructure. This real-time brokering of requests with available resources enables applications to be immediately serviced, driving greater throughput. Application workloads can be serviced in task units of milliseconds, thus allowing applications with run times in seconds to execute in a mere fraction of a second. This run-time reduction is crucial as banks move from online to real-time processing, which is required for functions such as credit decisions made at the point of trade execution. Additionally, the run time of applications that require hours to process, such as end-of-day profit and loss reports on a credit portfolio, can be reduced to minutes by leveraging this throughput and resource allocation strategy.
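The brokering behaviour described above, matching application requests to available resources according to priorities, can be pictured with a very small scheduler. The sketch below is purely illustrative and says nothing about how GridServer is actually implemented; the engine and task names are made up, and a priority queue stands in for the real-time broker.

```python
import heapq

class Broker:
    """Toy request broker: higher-priority tasks are matched to free workers first."""
    def __init__(self, workers):
        self.free_workers = list(workers)
        self.pending = []   # min-heap of (-priority, arrival order, task name)
        self._order = 0

    def submit(self, task_name, priority):
        heapq.heappush(self.pending, (-priority, self._order, task_name))
        self._order += 1
        return self._dispatch()

    def release(self, worker):
        self.free_workers.append(worker)
        return self._dispatch()

    def _dispatch(self):
        assignments = []
        while self.pending and self.free_workers:
            _, _, task = heapq.heappop(self.pending)
            assignments.append((task, self.free_workers.pop()))
        return assignments

if __name__ == "__main__":
    broker = Broker(["engine-1", "engine-2"])
    print(broker.submit("credit-risk-simulation", priority=5))
    print(broker.submit("market-risk-simulation", priority=1))
    print(broker.submit("intraday-credit-decision", priority=9))  # waits until a worker frees up
    print(broker.release("engine-1"))
```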

| 0 comments

IBM Grid Offering for Risk Management and Compliance


The IBM Grid Offering for Risk Management and Compliance provides an efficient, scalable and standards-based solution to the most pressing issues facing risk and compliance managers today. Teaming with DataSynapse, IBM has created an integrated technology and service offering that can help risk managers implement a grid infrastructure to comply more efficiently with the Patriot Act, the Sarbanes-Oxley Act and NYSE Rule 92.
The IBM Grid Offering for Risk Management and Compliance allows firms to run their existing analytics applications and tools, whether custom-built management systems, best-of-breed commercial applications or a combination thereof. The offering's open, flexible infrastructure supports a wide range of packaged and custom analytical applications, including software from Algorithmics, Fair Isaac, SunGard, SAS, Moody's KMV and many more.
The IBM offering centers around grid computing, an architectural approach that enables distributed computing over the Internet, an intranet, a virtual private network or some combination thereof. This approach can help aggregate disparate IT elements such as compute resources, data storage and filing systems to create a single, unified system and to address fluctuating application workload requirements. One clear advantage of a distributed environment for risk management operations is the ability to run several risk scenarios in parallel to generate an optimal solution as quickly as possible, without sacrificing increased accuracy and performance.
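To make the idea of running several risk scenarios in parallel concrete, the sketch below fans a handful of independent scenario computations out over a local process pool. It is a stand-in only, not IBM's or DataSynapse's software: the toy simulate_scenario loss model and the shock values are invented for illustration.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_scenario(args):
    """Toy Monte Carlo loss estimate for one market shock (illustrative numbers only)."""
    shock, n_paths, seed = args
    rng = random.Random(seed)
    losses = [max(0.0, rng.gauss(mu=shock * 100, sigma=30)) for _ in range(n_paths)]
    return shock, sum(losses) / n_paths

if __name__ == "__main__":
    scenarios = [(shock, 50_000, seed) for seed, shock in enumerate([0.01, 0.05, 0.10, 0.25])]
    # Each scenario is independent, so they can run on separate compute resources in parallel.
    with ProcessPoolExecutor() as pool:
        for shock, expected_loss in pool.map(simulate_scenario, scenarios):
            print(f"shock {shock:.2f}: expected loss {expected_loss:.1f}")
```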
As required by risk and compliance managers' analyses, the grid makes additional compute capacity available on a full-time or part-time basis. It helps banks leverage available, underutilized compute capacity within their existing IT infrastructures, thus helping them to reach end results far more rapidly than within conventional computer environments. And compared to a non-grid solution, the required compute resources are fewer and easier to manage, contributing to reduced total cost of ownership (TCO). The IBM offering also provides high levels of resiliency at the application level to help guarantee workload execution.
The IBM offering provides a cost-effective computing model for acquiring, deploying and managing resources. Because the existing infrastructure that supports the business of managing risk does not need to be replaced, firms can leverage their existing infrastructure and application investments, optimizing the efficiency of their risk management business and applications while migrating to a higher performance, lower cost, standards-based infrastructure. In addition, this grid infrastructure can be used for a wide array of analytical functions including trading systems, fraud detection in retail banking and credit cards, customer analytics and segmentation, and more.

Friday, October 5, 2012 | 0 comments

 Defective 'nasne' Recorder Gets Teardown Treatment

The release of the "nasne" recording server developed by Sony Computer Entertainment Inc (SCE) was suddenly postponed the day before its planned release date. But because the postponement was decided at the last minute, some units had already been delivered to users.

According to SCE, the release was postponed because they found that the nasne's HDD was partially broken. Even though it was shipped after quality checks at a manufacturing plant, an unexpected accident happened when it was being delivered, the company said. In fact, some of the "nasne" servers delivered to users did not operate because of their HDDs. To find out the cause of the failure, we obtained a nasne and started to break it down.

The teardown of the nasne turned out to be quite easy. When we took off the four small covers located on the four corners of one side of the nasne, we found a screw under each of them. After removing the screws with a crosshead screwdriver, we pried open the case by inserting a flat-blade screwdriver, a fingernail, etc. into a gap in the case.

The inside of the nasne was very simple. It contained only a large main board. The HDD mounted on the main board was printed with "HITACHI." It was the "Z5K500," a 500-Gbyte HDD manufactured by Hitachi Global Storage Technologies Inc (HGST). The thickness, size and rotation speed of the HDD are 7mm, 2.5 inches and 5,400rpm, respectively.

There was no component, such as a shock-absorbing material, to cushion impacts applied to the HDD. When seen from the side, it looked as if the HDD was floating in midair. Actually, the HDD was mounted on the main board via four small bases that were attached to the back side of the main board with screws.

Therefore, shocks applied to the main board will be certainly transmitted to the HDD. There was no shock-absorbing material used for the main board, either. So, if the nasne is hit or shaken, the vibration will probably be transmitted to the HDD via the main board.

Except for the HDD and the tuner module, the largest component was a semiconductor chip manufactured by Canada-based ViXS Systems Inc. It was the "XCODE 4210" media processor, which not only decodes video data encoded using MPEG 2, H.264, etc but also transcodes video data. Probably, the transcoding function is used when a TV program is recorded on the HDD in the 3x mode.

On the main board, there was also Toshiba Corp's "TC90532XBG" chip for demodulating terrestrial digital broadcasts (ISDB-T) and satellite digital broadcasts (ISDB-S). In addition, it was mounted with four "K4B1G1646G-BCH9" chips manufactured by Samsung Electronics Co Ltd. They seemed to be 1G-bit DDR3 SDRAM chips. Near the Ethernet port, there was the "RTL8211EG" Ethernet transceiver manufactured by Taiwan-based Realtek Semiconductor Corp.




| 0 comments

It seems that a new version of Asterisk has arrived!

The Asterisk Development Team (A.D.T.) has announced a new version of Asterisk: 1.8.16.0.
This release resolves a lot of issues reported by the community.
Thank you!
Some of them are:
* --- AST-2012-012: Resolve AMI User Unauthorized Shell Access through ExternalIVR
(Closes issue ASTERISK-20132. Reported by Zubair Ashraf of IBM X-Force Research)
* --- AST-2012-013: Resolve ACL rules being ignored during calls by some IAX2 peers
(Closes issue ASTERISK-20186. Reported by Alan Frisch)
* --- Handle extremely out of order RFC 2833 DTMF
(Closes issue ASTERISK-18404. Reported by Stephane Chazelas)
* --- Resolve severe memory leak in CEL logging modules.
(Closes issue AST-916. Reported by Thomas Arimont)
* --- Only re-create an SRTP session when needed; respond with correct crypto policy
(Issue ASTERISK-20194. Reported by Nicolo Mazzon)
For a full list of changes in this release, please see the ChangeLog.

| 0 comments

What is Data Mining 2012

Suppose you wanted to optimize a cyclone furnace (an older-type design for burning coal, still in
use in many power plants) for stable high flame temperatures. Stable temperatures are necessary
to ensure cleaner combustion, and less build-up of undesirable slag that may interfere with heat
transfer. Typically, most power plants are equipped with very effective data gathering and
storage technologies, so there are easy ways to extract the data that describe various parameter
settings, as well as flame temperatures, on a minute-by-minute interval.
Traditional methods to approach this task – to optimize combustion to achieve stable flame
temperatures in the presence of different loads, fuel quality, and so on – come down to the
application of a-priori (CFD) models, or more or less trial-and-error parametric testing.


CFD (Computational Fluid Dynamics) modeling
One approach is to use explicit theoretical (first principles) models, to understand (based on
these usually complex and highly nonlinear models) how best to set certain parameters, distribute
airflows, etc. to optimize performance. With an explicit theoretical knowledge (model) of how
exactly various parameters affect flame temperatures, one can use standard computer
optimization algorithms to identify optima, which "in the laboratory" can be expected to
optimize for stable flame temperatures. 
Typically, these methods are used to identify the parameter "boundaries" where to keep certain
input parameters (controlled by operators, or closed loop control systems) to ensure stable
operations. However, in practice, there are numerous obstacles that put limitations on the
applicability, effectiveness, and usefulness of CFD methods to optimize furnace performance "in
vivo", i.e., inside a "real" power plant. 
First, theoretical, a-priori, physical models of furnaces will only model parameters that are
known (consistent with models) to have an influence. If in a particular installation, there are
other specific "noise factors" that affect performance, CFD will not "know about this", nor can
CFD models accommodate various esoteric installation details in a real power plant. 

Second, CFD models can be very complex, and indeed become practically impossible to
optimize because of their complexity. 
So what is often needed is a "simplification" of sorts, or a "proxy-model" ("stand-in") that can
summarize how the parameter inputs such as over fired air (OFA) distribution, primary and
secondary air flows, coal-flow, and so on will affect flame temperatures, and the variability in
flame temperatures. Data mining methods can provide such "proxy models", as will be further
explained later.
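As a hedged sketch of such a proxy model, the snippet below fits a data-driven regression that maps a few input parameters (over-fired air distribution, primary and secondary air flow, coal flow) to flame temperature. The synthetic data, the column choices and the use of scikit-learn's random forest are my own illustrative assumptions; the white paper does not prescribe a particular algorithm, and real plant data would replace the made-up values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for minute-by-minute plant data: [OFA, primary air, secondary air, coal flow]
X = rng.uniform(low=[0.1, 20.0, 30.0, 5.0], high=[0.9, 80.0, 90.0, 25.0], size=(2000, 4))
# Made-up response: flame temperature with some structure plus noise
y = 1200 + 300 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 2] + 8.0 * X[:, 3] + rng.normal(0, 15, 2000)

# The fitted model is the "proxy": cheap to evaluate, so it can be scanned to look for
# parameter settings associated with stable, high flame temperatures.
proxy = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

candidate = np.array([[0.5, 45.0, 60.0, 15.0]])   # one hypothetical parameter setting
print("predicted flame temperature:", proxy.predict(candidate)[0])
```

Once fitted on real plant data, such a proxy can be scanned or optimized far more cheaply than a full CFD model, which is the role the data mining methods play here.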

| 0 comments

 Why Data Warehousing?
The concept of data warehousing has evolved out of the need for easy access to
a structured store of quality data that can be used for decision making. It is
globally accepted that information is a very powerful asset that can provide
significant benefits to any organization and a competitive advantage in the
business world. Organizations have vast amounts of data but have found it
increasingly difficult to access it and make use of it. This is because it is in
many different formats, exists on many different platforms, and resides in many
different file and database structures developed by different vendors. Thus
organizations have had to write and maintain perhaps hundreds of programs
that are used to extract, prepare, and consolidate data for use by many different
applications for analysis and reporting. Also, decision makers often want to dig
deeper into the data once initial findings are made. This would typically require
modification of the extract programs or development of new ones. This process
is costly, inefficient, and very time consuming. Data warehousing offers a better
approach.
Data warehousing implements the process to access heterogeneous data
sources; clean, filter, and transform the data; and store the data in a structure
that is easy to access, understand, and use. The data is then used for query,
reporting, and data analysis. As such, the access, use, technology, and
performance requirements are completely different from those in a
transaction-oriented operational environment. The volume of data in data
warehousing can be very high, particularly when considering the requirements
for historical data analysis. Data analysis programs are often required to scan
vast amounts of that data, which could result in a negative impact on operational
applications, which are more performance sensitive. Therefore, there is a
requirement to separate the two environments to minimize conflicts and
degradation of performance in the operational environment.
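The access-clean-transform-store sequence described above can be pictured with a small, purely illustrative snippet: it pulls customer records from two hypothetical sources that use different field names and date formats, cleans and standardizes them, filters out unusable rows, and stores the result in one consistent structure. Every field name and format here is invented for the example.

```python
from datetime import datetime

# Two hypothetical heterogeneous sources with different layouts
SOURCE_A = [{"cust_id": "17", "name": " Alice ", "joined": "21/12/2012"}]
SOURCE_B = [{"id": 42, "full_name": "Bob", "join_date": "2012-11-17"},
            {"id": None, "full_name": "", "join_date": "bad-date"}]  # dirty record

def clean_a(rec):
    return {"customer_id": int(rec["cust_id"]),
            "name": rec["name"].strip(),
            "joined": datetime.strptime(rec["joined"], "%d/%m/%Y").date()}

def clean_b(rec):
    return {"customer_id": int(rec["id"]),
            "name": rec["full_name"].strip(),
            "joined": datetime.strptime(rec["join_date"], "%Y-%m-%d").date()}

def integrate(source_a, source_b):
    """Clean, filter and transform both sources into one consistent structure."""
    unified = []
    for rec, clean in [(r, clean_a) for r in source_a] + [(r, clean_b) for r in source_b]:
        try:
            row = clean(rec)
        except (TypeError, ValueError):
            continue                      # filter out records that cannot be repaired
        if row["name"]:
            unified.append(row)           # keep only usable, non-empty records
    return unified

if __name__ == "__main__":
    for row in integrate(SOURCE_A, SOURCE_B):
        print(row)
```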
 