(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)

Published:
-- with international search report
-- before the expiration of the time limit for amending the claims and to be republished in the event of receipt of amendments

(57) Abstract
Methods and arrangements for providing efficient information transfer over a limited speed communications link

TECHNOLOGICAL FIELD

The invention concerns generally the field of transferring information over a limited speed communications link, which typically involves a wireless network connection. Especially the invention aims at providing efficiency to such information transfer. Efficiency is construed to have a manifestation in relatively short delays observed by a user, as well as in the possibility of distributing the transmission resources of a communications link among a relatively large number of simultaneous connections.
BACKGROUND OF THE INVENTION
At the priority date of this patent application information transfer between computers over wired or optically coupled networks has become a vital part of everyday life. A vast majority of such information transfer takes place according to or at least takes some partial advantage of various versions of the TCP/IP (Transmission Control Protocol/Internet Protocol), as is illustrated in fig. 1. In a simple case there are two communicating parties, known as the client 101 and the server 102. The former is the device or arrangement that a user utilizes to download and use information and store it locally, while the latter is a kind of central data storage from which information is to be downloaded. Both the client 101 and the server 102 set up a so-called protocol stack where information is exchanged in the conceptually vertical direction. Layers that are on the same level in the client and the server constitute the so-called peer entities that communicate with each other through all lower layers in the protocol stacks. On top of the pairs of IP layers 103 and TCP layers 104 there may be further higher layers. An example of a protocol layer that is widely used directly above the TCP layer is the HTTP (HyperText Transfer Protocol) that is meant for the transmission of files written in markup languages like HTML (HyperText Markup Language). Other examples of widely used higher protocol layers that in the layer hierarchy are level with HTTP are FTP (File Transfer Protocol), Telnet and SMTP (Simple Mail Transfer Protocol; not shown). The layers that are below the IP layer 103 are not specifically shown in fig. 1.
The basic model of two-party communications shown in fig.
Extending the techniques known from wired information transfer into a connection that includes a wireless link, such as a radio connection between a base station and a portable terminal, introduces new aspects that a user is likely to encounter in the form of lower bit rates and increased delays. These are mainly related to two interconnected features of the wireless link in comparison with wired or optically
The TCP/IP-based technique of transferring information is not particularly well suited for wireless information transfer. As an example we may consider a typical case of web browsing, where HTTP is used over TCP/IP in a communication connection that goes over a wireless link. Factors that cause the wireless link resources to be wasted to a relatively great extent are:
- sending the bulk of the contents from web servers (HTML pages) as plain text;
A solution that has become reality around the priority date of this patent application is WAP (Wireless Application Protocol). It is a bandwidth-optimized alternative to protocols that are not, such as the TCP/IP. Optimization involves measures like omitting redundant messages and replacing repeatedly occurring well-known tags with shortened codes. Optimization in this sense aims at the highest possible bandwidth efficiency, which itself has a definition as the ratio of the information transmission rate (the number of information bits transmitted per second) and the bandwidth in Hertz that is allocated for transmitting said information. Bandwidth efficiency measures (not surprisingly) how efficiently the communications system uses the available bandwidth.
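The bandwidth-efficiency figure defined above can be made concrete with a short calculation. This is only an illustrative sketch; the function name and the example numbers (a 14 400 bit/s channel in 25 kHz of allocated bandwidth) are assumptions, not values from the application:

```python
def bandwidth_efficiency(bit_rate_bps: float, bandwidth_hz: float) -> float:
    """Bandwidth efficiency: information bits per second per Hertz of
    allocated bandwidth, as defined in the text."""
    return bit_rate_bps / bandwidth_hz

# Illustrative numbers only: a 14 400 bit/s data channel carried in
# 25 kHz of allocated bandwidth.
eta = bandwidth_efficiency(14400, 25000)
print(round(eta, 3))  # bits per second per Hertz
```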
The WAP protocol stacks are illustrated in fig. 2, where the actual WAP layers are, from top to bottom, the application layer 201 (WAE, Wireless Application Environment), the session layer 202 (WSP, Wireless Session Protocol), the transaction layer 203 (WTP, Wireless Transaction Protocol), the security layer 204 (WTLS, Wireless Transport Layer Security Protocol) and the transport layer 205 (WDP, Wireless Datagram Protocol). In place of the WDP layer a UDP (User Datagram Protocol) layer must be used for IP bearers. Beneath the transport layer 205 there are further lower layers like IP 206 and physical layers 207; together these are also commonly referred to as the bearer layers.
WAP was developed to enable access to networked information from devices that have the typical features of wireless terminals, i.e. limited CPU (Central Processing Unit) capacity, limited memory size, battery-powered operation and a simple user interface. The commercial applications of WAP that exist at the priority date of this patent application are the so-called WAP phones that combine the features of a mobile telephone with certain limited functionality of a web browser. In practice we have seen that the small number of WAP users compared to that of wired Internet users is a major drawback that reduces the commercial interest in setting up WAP services, especially at the advent of third generation digital cellular networks.
Moreover, already due to the introduction of the first phase of packet-switched cellular data networks using GPRS (General Packet Radio Service) the need of wireless access for full-fledged mobile clients is expected to grow considerably. WAP is
As an example of the last-mentioned, fig. 3 illustrates the so-called MOWGLI concept (Mobile Office Workstations using GSM Links) where a mobile workstation 301 utilises a wireless link between a GSM (Global System for Mobile telecommunications) cellular phone 302 and a GSM base station 303 to communicate with what is called a mobile-connection host 304. In the mobile workstation 301 old applications 311 and 312 communicate with agent programs 313, 314 and 315 through an application programming interface known as the Mowgli Socket API 316. From the agents there is a connection to a Mowgli Data Channel Service API 317 and the associated Mowgli Data Channel Service 318 either directly or through a Mowgli Data Transfer Service API 319 and the associated Mowgli Data Transfer Service 320. New mobile applications 321 may have been designed to communicate directly (through the appropriate API) with either the Mowgli Data Transfer Service 320 or the Mowgli Data Channel Service 318. A control tool 322 is included for controlling the operation of the other program components. A wireless interface block 323 constitutes the connection from the mobile workstation 301 to the cellular phone 302.
In the mobile-connection host 304 there are counterparts for certain entities implemented in the mobile workstation 301: a wireless interface block 331, a Mowgli Data Channel Service 332 with its associated API 333 as well as a Mowgli Data Transfer Service 334 with its associated API 335. A number of proxies 336, 337, 338 and 339 may act in the mobile-connection host 304 as counterparts to the agent programs
The division of the MOWGLI architecture into Agent/Proxy, Data Transfer and Data Transport layers is shown with heavy dotted lines in fig. 3. Of the protocols used therein, between an agent and a proxy (like the master agent 314 at the mobile workstation 301 and the master proxy
SUMMARY OF THE INVENTION

As a consequence of the existence of a multitude of protocols described above, the problem of providing efficient and widely usable information transfer over a wireless network connection remains unsolved at the priority date of this patent application. It is therefore an object of the invention to provide a method and an arrangement for providing efficient and widely usable information transfer over a wireless network connection. It is also an object of the invention to provide a method and an arrangement for enhancing the level of service that a human user of a wireless network connection experiences. It is a further object of the invention to provide extendability to a wireless network link so that minimal software and hardware additions would be needed to widen the selection of transmission protocols that can be used for wireless network access.
The objects of the invention are achieved by setting up a pair of functional entities that comprise protocol converters and additional modules. These functional entities are placed at each end of a wireless communication connection. They make a transparent conversion to a bandwidth-optimized wireless protocol for information to be transmitted wirelessly and a corresponding reverse conversion for information that has been received wirelessly.
The invention applies to a method the characteristic features of which are recited in the characterising portion of the independent patent claim directed to a method. The invention applies also to an arrangement the characteristic features of which are recited in the characterising portion of the independent patent claim directed to an arrangement. The characteristic features of other aspects of the invention are recited in the characterising portions of the respective independent patent claims.
The present invention has its roots in the contradictory situation where those protocols that have been most widely used for transferring data over network connections are not bandwidth-optimized, while some other protocols that are were only developed for a narrow application area or have not gained much popularity. In a method and arrangement according to the invention there is a bandwidth-optimized link portion located at that point of a wireless network connection that covers the radio interface. Both the user's application at the client end of the connection and
At the priority date of this patent application such a widely used network communication protocol is typically TCP/IP, while the bandwidth-optimized protocol is typically WAP.
Various advantageous features can be used in the additional modules to enhance the efficiency of the arrangement beyond that of a simple protocol conversion. Most advantageously the client proxy, which is the device and/or process that is responsible for the connection and protocol conversions at the client side, has a cache memory where it stores a copy of data that has been requested and downloaded from the server side. If the same data is then needed again, it suffices to check whether there have been any changes to it since it was last downloaded. No changes means that a new downloading operation over the radio interface is completely avoided. Even if there have been changes it suffices to download only the changed data instead of the whole contents of a requested data entity. The client proxy may even have a background connection to the server side over which changes and updates to recently downloaded and cached data are transmitted automatically in preparation for the potentially occurring need for an updated version at the client side.
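A minimal sketch of this validation idea follows. All names (`ClientProxyCache`, the version-token mechanism, the `download` callable) are illustrative assumptions; the application does not prescribe any particular interface:

```python
class ClientProxyCache:
    """Sketch of validation-based caching at the client proxy:
    re-download over the radio link only when the server-side
    version token has changed."""

    def __init__(self):
        self._store = {}  # url -> (version_token, content)

    def fetch(self, url, remote_version, download):
        """download is a callable that fetches fresh content over the
        radio link. Returns (content, went_over_radio)."""
        cached = self._store.get(url)
        if cached and cached[0] == remote_version:
            return cached[1], False          # served locally, link untouched
        content = download(url)              # radio transfer only when stale
        self._store[url] = (remote_version, content)
        return content, True


cache = ClientProxyCache()
downloads = []

def radio_download(url):
    downloads.append(url)
    return "contents of " + url

cache.fetch("http://example.test/a", "v1", radio_download)
content, went_over_radio = cache.fetch("http://example.test/a", "v1", radio_download)
print(went_over_radio, len(downloads))  # False 1 -> repeat request served locally
```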
A further development to the idea of simple caching is predictive caching, which means that the client proxy follows either certain regularities in the behaviour of the user or certain preprogrammed instructions (or both) and proactively downloads data that the user is likely to need during a certain time period to come. Similarly as in the difference caching referred to above, the predictive caching arrangement may have a background connection to the server side so that while the user is doing something else, the client proxy may prepare for the predicted need of certain data by downloading it from the network. Predictive caching is easily combined with difference caching so that in preparation for the potential need of certain predictively downloaded data the client proxy receives changes and updates to said data according to certain updating rules.
An effective way of achieving efficiency is to enable the functional entities at the ends of the wireless communication connection to multiplex separate logical connections into one connection. The multiplexed logical connections may include representatives from both those established by the mobile client software and those related to the above-described caching processes.
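One possible way to realize such multiplexing is to tag each logical connection's data with a channel identifier; the framing format below (a two-byte channel-id prefix) is purely an illustrative assumption, not something the application specifies:

```python
class Multiplexer:
    """Sketch: tag each logical connection's payload with its channel id
    so that a single physical connection can carry them all."""

    def __init__(self):
        self._next_id = 0

    def open_channel(self) -> int:
        """Allocate a new logical-connection identifier."""
        self._next_id += 1
        return self._next_id

    @staticmethod
    def mux(channel_id: int, payload: bytes) -> bytes:
        # Prefix the payload with its channel id (assumed 2-byte framing).
        return channel_id.to_bytes(2, "big") + payload

    @staticmethod
    def demux(frame: bytes):
        # Recover (channel_id, payload) at the far end.
        return int.from_bytes(frame[:2], "big"), frame[2:]


m = Multiplexer()
http_ch, cache_ch = m.open_channel(), m.open_channel()
frame = Multiplexer.mux(cache_ch, b"background update")
cid, data = Multiplexer.demux(frame)
print(cid == cache_ch, data)  # True b'background update'
```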
BRIEF DESCRIPTION OF DRAWINGS

The novel features which are considered as characteristic of the invention are set forth in particular in the appended claims. The invention itself, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
Fig.
Figs. 1, 2 and 3 were described above within the description of prior art, so the following description of the invention and its advantageous embodiments will focus on figs. 4 to 10.
DETAILED DESCRIPTION OF THE INVENTION

The exemplary embodiments of the invention presented in this patent application are not to be interpreted to pose limitations to the applicability of the appended claims. The verb "to comprise" is used in this patent application as an open limitation that does not exclude the existence of also unrecited features. The features recited in depending claims are mutually freely combinable unless otherwise explicitly stated.
Fig. 4 illustrates the principle of a method and an arrangement where a first communicating party 401, designated here as the client side, has a wireless network connection to a second communicating party 402, which is here designated as the server side. It should be easy to appreciate the fact that the designations "client" and
At the client side there is an application 411 that has been designed to use a first communications protocol stack, which is shown as 412. A characteristic feature of the first communications protocol stack is that it is not bandwidth-optimized and therefore not optimal for communications over a wireless link. For this reason there is at the client side a client-side proxy 413 where the first communications protocol stack 414 is linked, at a certain relatively high level, to a second communications protocol stack 415. Contrary to the first communications protocol stack, the second communications protocol stack is bandwidth-optimized. The link between the stacks 414 and 415 constitutes another application layer 416 that should not be confused with the application 411 that is the actual user of information transferred over the network connection. Note that the invention does not place limitations on the number of physically separate devices that are used to set up the client side: it is possible to run the application 411 and the client-side proxy 413 in one and the same device, while it is just as possible to have at least two physically separate devices to implement these functions.
On the server side 402 there is an access gateway 421 where the part that faces the client side 401 comprises a peer entity stack 422 for the second communications protocol stack 415 at the client side. On top of said peer entity stack 422 there is a server side applications layer 423 from which there are couplings further into one or
In order to take full advantage of the invention, the client-side proxy 413 and the access gateway 421 should not be just unintelligent protocol converters. According to the invention the application layers 416 and 423 contain a set of modules, which can greatly improve the efficiency of information transfer.
Fig. 5 is a more detailed exemplary embodiment of the principle shown above in
fig. 4. Here the client side 401 consists of a client application 501 and a client proxy
Under the client proxy's application layer 512 on the other side there are, from top
to down, the WSP 521, WTP 522, WTLS 523, UDP 524 and
At the top in the layer hierarchy of the access gateway 551 there is another applica-
tion layer 552. Under it on the side facing the client proxy there are, from top to
One of the tasks of the client proxy 511 is to make it completely transparent to the client application 501 that the network connection comprises a stretch where a completely different communications protocol stack is used. This has the advantageous consequence that the client application 501 can be of an off-the-shelf type known already at the priority date of this patent application. Thus the wide selection of already available client-side application programs can be used without making changes. When one "looks" at the client proxy 511 from the viewpoint of the client application 501, the client proxy 511 appears to function exactly like a network server. Next we will analyse an advantageous implementation architecture for the client proxy that aims at fulfilling this task.
STRUCTURE AND OPERATION OF THE CLIENT PROXY
Fig. 6 illustrates a schematic structure of an advantageous embodiment of the application layer 512 at the client proxy 511 of fig. 5. The left-hand side of fig. 6 illustrates the coupling(s) to the highest layer(s) of the first communications protocol stack: HTTP 601 and FTP 602 are examples. On the right-hand side there is the coupling to the second communications protocol stack, schematically shown as the WAP stack 603. Note that depending on the first communications protocol stack the coupling to the right from the application layer 512 does not need to concern only the highest WAP layer, i.e. the WSP layer. For example, if the left-hand coupling is with FTP the right-hand coupling can take place directly with the WTP layer, while a left-hand coupling with an HTTP client necessitates a right-hand coupling with the WSP layer.
The central functional module 604 in fig. 6 is called the CH or Connection Handler. It has a bidirectional coupling with at least one PH (Protocol Handler); PHs 605 to 609 are shown in fig. 6. The CH 604 also has bidirectional couplings with a CCH (Content Cache Handler) 610. The tasks of the CH 604 can be summarized as follows:
The CCH 610 is a local cache memory the primary task of which is to locally store certain information that according to a certain algorithm is likely to be requested by client applications. A simple strategy is to store previously requested information so that there is a maximum amount of memory allocated for caching: after the allocated memory becomes full, the content of the cache is discarded according to the chosen cache replacement policy. More elaborate strategies will be described in more detail later.
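The bounded-memory strategy just described can be sketched as follows. A least-recently-used replacement policy is assumed here for concreteness, although the text deliberately leaves the choice of replacement policy open; all names are illustrative:

```python
from collections import OrderedDict

class ContentCacheHandler:
    """CCH sketch: a cache with a fixed memory budget that evicts the
    least-recently-used entries once the budget is exceeded."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self._items = OrderedDict()   # key -> content bytes, oldest first
        self._used = 0

    def get(self, key):
        if key in self._items:
            self._items.move_to_end(key)   # mark as recently used
            return self._items[key]
        return None

    def put(self, key, content: bytes):
        if key in self._items:
            self._used -= len(self._items.pop(key))
        # Evict oldest entries until the new content fits the budget.
        while self._items and self._used + len(content) > self.max_bytes:
            _, evicted = self._items.popitem(last=False)
            self._used -= len(evicted)
        self._items[key] = content
        self._used += len(content)


cch = ContentCacheHandler(max_bytes=10)
cch.put("a", b"12345")
cch.put("b", b"12345")
cch.get("a")              # touch "a" so "b" becomes the eviction candidate
cch.put("c", b"123")      # exceeds the budget: "b" is evicted
print(cch.get("b") is None, cch.get("a") is not None)  # True True
```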
Each PH 605-609 is associated with a client application protocol of its own. A PH contains protocol-specific instructions about how a request associated with the appropriate client application protocol should be processed before forwarding it over to the second communications protocol stack (the WAP stack 603 in the example of fig. 6). The PH is also equipped to perform the corresponding actions as well as similar procedures that are to be applied to responses received from the WAP stack. Examples of the above-said actions are the following:
- binary encoding of uplink commands, headers and all other standard elements for all text based protocols, except when such binary encoding is already a part of the appropriate client application protocol or second communications protocol, and corresponding decoding for downlink data
- binary encoding of uplink tags of markup languages like HTML or XML (eXtensible Markup Language), except when such binary encoding is already a part of the
All PHs have couplings to the MU (Multiplexing Unit) 611, which has two main tasks:
- multiplexing and demultiplexing of logical connections so that the number of separate connections to the access gateway is minimized, and
- prioritizing uplink connections according to priorities set by the CH.
Fig. 7 illustrates the exchange of calls and messages within and between the entities shown in fig. 6 in an exemplary case where a client application requests information that is to be downloaded from the network. At step 701 there comes a request from a client application through a first protocol stack. At step 702 the CH receives the request and analyzes it in order to at least recognize through which one of the possibly several parallel first protocols it came.
At step 703 the CH sends an inquiry to the CCH in the hope that the requested information would already exist there as a previously stored copy. In fig. 7 we assume that this is not the case, so the CCH responds at step 704 with a negative. Thereafter the CH uses its knowledge about the first protocol that was recognized at step 702 and calls the appropriate PH at step 705. The called PH processes the request at step 706 by performing operations, examples of which were given above, and forwards the request to the MU at step 707.
At step 708 the MU checks the priority of the request (priorities are assigned to every request by the CH). The MU communicates the processed request to the appropriate level in the second protocol stack at step 709.
Note that the processed request does not always go to the top layer of the second protocol stack: for example, if the request came from the client application through HTTP, WSP must be used, while an FTP request from the client application allows the WTP layer to be contacted directly.
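This protocol-dependent entry point can be sketched as a simple dispatch table. Only the HTTP-to-WSP and FTP-to-WTP mappings come from the text; everything else in the sketch (names, the error behaviour) is an illustrative assumption:

```python
# Entry layer of the second (WAP) protocol stack per first-protocol type,
# following the HTTP -> WSP and FTP -> WTP examples given in the text.
ENTRY_LAYER = {
    "HTTP": "WSP",   # session-oriented requests need the session layer
    "FTP": "WTP",    # the transaction layer suffices, WSP is bypassed
}

def entry_layer(first_protocol: str) -> str:
    """Return the WAP-stack layer at which a processed request enters."""
    try:
        return ENTRY_LAYER[first_protocol]
    except KeyError:
        raise ValueError(f"no handler registered for {first_protocol!r}")

print(entry_layer("HTTP"), entry_layer("FTP"))  # WSP WTP
```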
Step 710 represents generally the phase in which the communication is carried out
with the access gateway through the second protocol stack. At step 711 there arrives
a response from the access gateway to the MU. The response must be subjected to
There may be an acknowledgement from the CCH at step 717. At step 718 the same information is forwarded towards the client application.
One efficient way for reducing overheads and performing bandwidth optimization is the multiplexing of connections at the MU. Multiplexing concerns both requests from client applications and the client proxy's internally originated requests, e.g. those for "active" caching. Fig. 8 is a state diagram that illustrates the aspects of connection multiplexing. When there are no active communications connections to the access gateway, the MU is in state 801. A request causes a transition to state 802, where the MU sets up a connection to the access gateway. Successfully setting up the connection is followed by an immediate transition to state 803, where the connection is maintained. If, during the period when the connection to the access gateway is active, there comes another request for another connection, a transition to state 804 occurs where the newly requested connection is added to a multiplexing scheme that the MU uses to multiplex all separately requested first protocol connections to a single second protocol connection. After the addition step
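The state transitions of fig. 8 can be sketched as a small state machine. The class and method names are illustrative assumptions; the state names follow the reference numerals 801 to 804, and the return from state 804 to 803 after the addition is itself an assumption, since the text is truncated at that point:

```python
class MUStateMachine:
    """Sketch of fig. 8: IDLE (801) -> SETUP (802) -> ACTIVE (803),
    with ADDING (804) entered when a further logical connection is
    multiplexed into the already active connection."""

    def __init__(self):
        self.state = "IDLE"          # 801: no active connections
        self.logical_connections = 0

    def request(self):
        if self.state == "IDLE":
            self.state = "SETUP"     # 802: set up the wireless connection
        elif self.state == "ACTIVE":
            self.state = "ADDING"    # 804: add connection to the mux scheme
        self.logical_connections += 1

    def setup_complete(self):
        if self.state in ("SETUP", "ADDING"):
            self.state = "ACTIVE"    # 803: connection maintained


mu = MUStateMachine()
mu.request(); mu.setup_complete()        # first request: 801 -> 802 -> 803
mu.request(); mu.setup_complete()        # second request: 803 -> 804 -> 803
print(mu.state, mu.logical_connections)  # ACTIVE 2
```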
STRUCTURE AND OPERATION OF THE ACCESS GATEWAY
The most important part of the access gateway regarding operation according to the
invention is the application layer 552 seen in fig. 5. The other parts of the access
gateway are basically just means for implementing protocol stacks that are known
As another alternative there is a coupling from the CH 902 to a WAP Applications Environment 905. We will describe later the meaning and importance of this alternative. We may refer generally to the protocols on the right in fig. 9 as the network protocols, because they are used for communications in the direction of various (fixed) networks.
Similarly as in the application layer of the client proxy, there are an MU and various PHs (Protocol Handlers) that are specific to those network protocols through which the CH 902 wishes to be able to communicate. PHs 911 to 915 and the MU 931 are shown in fig. 9. The CCH concept is also used in the access gateway, but instead of a single CCH there are multiple CCHs, four of which are shown as examples in fig. 9 as 921-924. The reasoning behind using several parallel CCHs is that there may be a separate CCH for each individual client proxy that has registered itself at a certain access gateway, or group-specific CCHs each of which corresponds to a group of registered client proxies. The invention does not require several CCHs to be used at the access gateway, but having individual CCHs makes it easier to implement the (background) routines that aim at keeping the client proxies' CCHs up to date during active caching. Moreover, the difference between individual CCHs may be purely logical (they can all be stored in one physical database).
The application layer also contains a DNSP (Domain Name Server Proxy) 930, which is a local database that is used to map domain names to network addresses. Its purpose is to improve efficiency by avoiding requests to a remote DNS in cases when locally available information is sufficient for name resolving.
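The DNSP's look-local-first behaviour can be sketched as follows; the resolver interface and all names are illustrative assumptions, not part of the application:

```python
class DomainNameServerProxy:
    """DNSP sketch: a local name -> address map that is consulted
    before any remote DNS lookup is attempted."""

    def __init__(self, remote_resolve):
        self._local = {}
        self._remote_resolve = remote_resolve  # callable for the remote DNS

    def resolve(self, name: str) -> str:
        if name in self._local:
            return self._local[name]           # no remote request needed
        address = self._remote_resolve(name)   # fall back to remote DNS
        self._local[name] = address            # remember for next time
        return address


remote_calls = []
def remote(name):
    remote_calls.append(name)
    return "192.0.2.1"       # documentation-range address, illustrative only

dnsp = DomainNameServerProxy(remote)
dnsp.resolve("server.example")
dnsp.resolve("server.example")      # second lookup answered locally
print(len(remote_calls))  # 1
```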
Fig. 10, which consists of figs. 10a and 10b together, illustrates the handling of an exemplary request 1001 that an access gateway receives from a client proxy through the second protocol (WAP) stack. At step 1002 the MU recognizes the request protocol and calls a corresponding PH at step 1004 to process the request, i.e. to reverse the processing that was made at a PH of the client proxy that is the originator of the request. The result of the processing 1005 at the PH is a processed request that is forwarded to the CH at step 1006.
Note that the MU is capable of receiving requests not only from the top layer of the second protocol stack, but from inner layers as well. For example, an FTP request from the client application will be passed to the MU immediately from the WTP layer, not via the WSP layer. It should also be noted that the MU is capable of recognizing "true" WAP requests. They are forwarded directly to the CH (shown by a dashed line as step
At step 1007 the CH recognizes the processed request and the protocol that is to be used to forward it to a content provider's server. The CH also monitors the priority of every particular request and performs actions which guarantee that the priority of the corresponding response is the same as for the request.
It would be most expedient if the requested contents could be read locally from a
CCH instead of forwarding the request to any other server through a network, so at
step
At step 1012 the CH transmits the processed request to the selected protocol (HTTP, FTP,...). The above-said "true" WAP requests are directly forwarded to the WAE level without any further processing.
Step
In order to be able to send the response to the client proxy through the second protocol (WAP) stack, the CH must again call the appropriate PH at step 1016. The selected PH processes the response at step 1017 and forwards it to the MU at step 1018.
At step 1019 the MU performs actions needed for prioritizing of connections, and
then communicates the processed response to the appropriate level in the second
protocol stack at step 1020. So the response is finally transmitted to the client proxy
It should be noted that the access gateway has by default a fixed high-capacity connection to at least one communications network, so exchanging information between it and content sources at various locations throughout the world is much easier and simpler and much less limited by bandwidth restrictions than transferring information over the wireless link to the client proxy. Also memory space is a less scarce resource at the access gateway than at the client proxy. This underlines the significance of active caching at the access gateway. After the strategy has been chosen that is to be adhered to in active caching, the access gateway follows the changes of content in the selected content sources that are subject to active caching and regularly updates the client proxy specific (or client proxy group specific) cache databases. Keeping the latter at the access gateway as up to date as possible gives maximal freedom for choosing the time and way in which low-priority (background) connections over the wireless link are established for updating corresponding cache databases at the client proxies.
PRIORITIZING CONNECTIONS

Above it was explained how one of the advantageous functions of the client proxy is the multiplexing of several simultaneously active client application connections to a single wireless connection. As a first assumption each client application connection has equal priority regarding the multiplexing procedure, which means that every connection gets an equal share of the communications resources that the wireless connection represents. However, it may be advantageous to define, and store in association with the CH, various levels of priority so that as a part of the multiplexing procedure the CH evaluates the urgency of each client application request, so that a larger proportional portion of the available communications resources will be allocated to those connections that serve the most urgent requests.
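Priority-weighted sharing of the wireless connection's resources, as described above, can be sketched as a proportional allocation. The weight values and names are illustrative assumptions; the application does not prescribe a particular allocation formula:

```python
def allocate_shares(connections):
    """connections: {name: priority_weight}. Returns the fraction of the
    wireless link's capacity given to each multiplexed connection,
    proportional to its priority weight."""
    total = sum(connections.values())
    return {name: weight / total for name, weight in connections.items()}

# Illustrative weights: an urgent interactive request dominates, while
# low-priority background traffic (e.g. active caching) gets the rest.
shares = allocate_shares({"interactive": 8, "active_caching": 2})
print(shares["interactive"])  # 0.8
```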
Handling the background processes that serve active caching is typically an ultimate example of prioritizing. The active caching requests from the CCH have so low a priority that if there are any other requests that necessitate downloading information over the wireless connection, these are served first before any capacity is allocated to active caching.
The selection of priorities may follow a predetermined and fixed strategy that takes into account e.g. the real-time nature of certain requests and the non-real-time nature of others. It is also possible to let users set the priorities themselves so that they can determine what kind of connections should be served first.
CACHING STRATEGIES

Next we will analyze certain ways of achieving savings in the number of time-critical requests that must be forwarded from the CH to the wireless network connection. A time-critical request is, in a wide sense, a request for such information the most prompt arrival of which would be appreciated by the requesting client application. For example, if a human user wants to examine a web page on a display, the faster the correct page appears the better. This wide sense of time-criticality also covers narrow interpretations according to which only requests concerning real time data are time-critical. Achieving said savings means that a certain number of time-critical requests may be satisfied by reading information locally from the CCH.
It might be assumed that a substantial number of users of network connections have a rather limited selection of frequented network locations. Additionally we may assume that the content that can be downloaded from these "favourite" locations changes only gradually.
Similarly any of said aspects as well as any combinations of them can be applied in maintaining a cache at the client proxy, or even a number of user-specific caches at the client proxy if there are multiple users using the same client proxy.
Difference caching means that the whole changed contents of a network location subject to caching are not reloaded from the network but only those parts of it where changes have occurred. Difference caching is mainly applicable in maintaining the cache(s) at the client proxy. The cache(s) at the access gateway download contents from the network in the way defined by the network server; at the priority date of this patent application it is not regarded as possible to ask for only a difference be-
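The difference-caching idea can be sketched as follows; the line-based granularity of the differences is an illustrative assumption, since the application does not specify how changed parts are delimited:

```python
def diff_update(cached_lines, changed):
    """Apply a difference update to locally cached content.

    changed: {line_index: new_line} -- only the modified lines cross the
    radio link; the unchanged remainder is reused from the local cache."""
    updated = list(cached_lines)
    for index, new_line in changed.items():
        updated[index] = new_line
    return updated


cached = ["<html>", "<p>old news</p>", "</html>"]
fresh = diff_update(cached, {1: "<p>fresh news</p>"})
print(fresh[1], len(fresh))  # <p>fresh news</p> 3
```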
Active caching means that the CCH makes requests for downloading information from the access gateway without any client application directly requiring the information. An example of active caching is a situation where the user specifies in the client proxy that he wants the latest version of a certain network resource to be always immediately available on the client device.
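The active caching idea can be sketched as follows: the cache refreshes user-subscribed resources in the background, with no client request triggering the downloads. This is an illustrative sketch; the class, the `fetch_from_gateway` stand-in and the URL scheme are assumptions, not part of the disclosure.

```python
# Sketch of active caching at the client-side cache host (CCH).
# fetch_from_gateway is a stand-in for the real download path
# through the access gateway.

class ClientCache:
    def __init__(self, fetch_from_gateway):
        self._fetch = fetch_from_gateway
        self._store = {}
        self._subscriptions = set()   # resources the user wants kept fresh

    def subscribe(self, url):
        """User specifies in the client proxy that this resource
        should always be immediately available."""
        self._subscriptions.add(url)

    def background_refresh(self):
        # Run periodically; no client application triggers these downloads.
        for url in self._subscriptions:
            self._store[url] = self._fetch(url)

    def get(self, url):
        return self._store.get(url)

cache = ClientCache(lambda url: f"contents of {url}")
cache.subscribe("wap://news.example/front")
cache.background_refresh()
assert cache.get("wap://news.example/front") == "contents of wap://news.example/front"
```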
Predictive caching differs from active caching in that the CCH additionally tries to predict at least a part of the network locations from which contents should be downloaded and forwarded to client applications in the near future. Predicting may be based on a variety of factors, such as observed frequency of use (the contents of a certain network location have been requested at least N times, where N is a positive integer), observed regularity in use (the contents of a certain network location have been requested at certain regularly repeated times), observed focus of interest (the relative portion of requests that concern contents of a certain type is large) and observed following of trends (the requests follow generally documented popularity statistics; the user wants to always check the entries on a latest "best of the web" list). The CCH implements predictive caching by either just storing the latest requested copy of the contents which it assumes to be requested again, or by regularly updating the predictively cached contents so that if and when a new request comes regarding a certain network location, the version that can be read directly from the CCH is never older than the length of the updating interval. Regular updating is most advantageously used when the predictions are based on observed regularity in use, so that the contents stored in the CCH are updated when the next occurrence of the observed regular use is approaching.
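The frequency-of-use rule above (a location requested at least N times becomes a candidate for pre-fetching) can be sketched like this. The threshold value and data structure are illustrative assumptions.

```python
# Sketch of frequency-based predictive caching: a network location
# becomes a prediction candidate once it has been requested at least
# N times (the threshold), after which it may be pre-fetched regularly.

from collections import Counter

class PredictiveCache:
    def __init__(self, threshold=3):
        self.threshold = threshold      # N in the text; illustrative value
        self.counts = Counter()
        self.predicted = set()

    def record_request(self, url):
        self.counts[url] += 1
        if self.counts[url] >= self.threshold:
            self.predicted.add(url)     # candidate for regular updating

pc = PredictiveCache(threshold=3)
for _ in range(3):
    pc.record_request("wap://weather.example")
pc.record_request("wap://once.example")
assert "wap://weather.example" in pc.predicted
assert "wap://once.example" not in pc.predicted
```

The other prediction factors the text lists (regularity in use, focus of interest, trend following) would plug in as additional signals feeding the same `predicted` set.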
Generally caching could also be divided into two basic types: push and pull. In the present invention push type caching is a procedure where the CCH has subscribed as an active caching client of the access gateway regarding certain
It is common to all active and predictive caching requests that these are most advantageously performed as background processes, so that the user does not need to know that they are taking place. Similarly they are most advantageously performed at times when ample bandwidth is available, e.g. during the night or when the communications capabilities of the client side are not currently used for something else, like an ongoing telephone call.
The mobility of the client side also suggests that active and predictive caching should take into account the current location of the user: e.g. when the user is at a location where a local high-speed network connection is available, the cache can be updated. Another example is a user entering a cell that covers a shopping mall; in this case information regarding the products and services available in the mall could be predictively stored into the CCH.
Enhanced caching also implies that means are available for optimizing said caching strategies, in terms of e.g. communication connection cost or communication traffic volume. It is considered to be within the capability of a person skilled in the art to present methods of optimization once the prerequisites, like charging policies, are known.
Caching may also follow a number of strategies regarding the grouping of clients and client proxies, or even individual users, to their respective caches. The most elaborate grouping alternative is to have a personalized cache and its corresponding caching strategy for each individual, identified user, so that even if a number of users appear to use the same client proxy, they all have their personalized "cache accounts" at the access gateway. A slightly less complicated approach delegates the task of user-specific caching to the client proxy (which then in turn must have several logically separate cache memories instead of the single one referred to above) so that the access gateway only keeps a logically separate cache for each client
In all grouping alternatives except the simplest (trivial) one, the search for certain requested contents may follow an expansive pattern: for example, if user-specific caches have been defined but the cache reserved for a certain user does not contain the requested contents, the caches reserved for other users of the same client proxy are searched next, taking user access rights into account. If still no match is found, the search proceeds to the caches of other client groups, and so on. The expansive searching strategy can be used on a smaller scale in the cache of the client proxy, if user-specific caches have been defined therein.
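The expansive search pattern can be sketched as a widening lookup: the requesting user's own cache first, then (subject to access rights) the caches of other users. All names and the `may_read` access-rights hook are illustrative assumptions.

```python
# Sketch of the expansive cache search across user-specific caches.
# caches: dict mapping user -> {url: contents}
# may_read(user, owner): access-rights check for reading another user's cache.

def expansive_search(user, url, caches, may_read):
    own = caches.get(user, {})
    if url in own:
        return own[url]                  # hit in the user's own cache
    for owner, cache in caches.items():  # widen to other users' caches
        if owner != user and may_read(user, owner) and url in cache:
            return cache[url]
    return None   # not cached anywhere reachable; forward the request

caches = {"alice": {}, "bob": {"wap://shared.example": "page"}}
allow_all = lambda user, owner: True
assert expansive_search("alice", "wap://shared.example", caches, allow_all) == "page"
assert expansive_search("alice", "wap://missing.example", caches, allow_all) is None
```

A further widening stage to the caches of other client groups, as the text describes, would follow the same shape around this function.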
COUPLING OTHER KINDS OF (WAP) WIRELESS CLIENTS TO THE SAME ACCESS GATEWAY

In the description above we have assumed that the client side that is to communicate with an access gateway has the overall structure shown in figs. 4 and 5, i.e. that there is a client side application that is designed for a non-bandwidth-optimized protocol stack and necessitates a protocol conversion according to the invention. However, it should be noted that the invention has an inherent advantage regarding service that can be given also to other kinds of clients. At the priority date of the invention there are numerous users of WAP phones and other WAP-based portable terminals: because the access gateway uses WAP as the protocol that constitutes the interface towards wireless terminals, it is actually irrelevant whether the client is of the type shown in figs. 4 and 5 or just a conventional "true" WAP client.
Extending the service of the access gateway to true WAP clients is the reason for having a direct coupling from the CH 902 to a WAE entity 905. If the request from the client side turns out to relate to true WAP functionality, the CH 902 just forwards it to the WAE entity 905, which takes care of the rest of the processing according to the known and documented practices of WAP.
Another feature of multi-service readiness is related to the fact that in the previous
description we left it completely open what constitutes the lower layers 526 and 558
RATE ADAPTATION AND PORTAL
Preparing or optimizing certain contents according to the capabilities of a user end device and/or according to link characteristics is often referred to as portal functionality. The access gateway may have this functionality as its own feature, for example in one or several PHs, or it may communicate with a certain specific external portal device that is capable of producing optimized content pages. Requests from the access gateway to such an external portal device should indicate the requested amount and type of portal functionality, for example by citing the current wireless link data rate, referring to maximal or optimal content page sizes, or announcing the type of client side device that is to use the requested content page.
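A request carrying the hints just listed (link data rate, page size limit, device type) could look like the following. The field names and values are assumptions for illustration; the text defines no concrete request format.

```python
# Sketch of a request from the access gateway to an external portal
# device, carrying the portal-functionality hints the text mentions.
# Field names are hypothetical, not a defined protocol.

def build_portal_request(url, link_rate_bps, max_page_bytes, device_type):
    return {
        "url": url,                       # the content page being requested
        "link_rate_bps": link_rate_bps,   # current wireless link data rate
        "max_page_bytes": max_page_bytes, # maximal acceptable content page size
        "device_type": device_type,       # client side device class
    }

req = build_portal_request("http://portal.example/news", 9600, 2048, "wap-phone")
assert req["link_rate_bps"] == 9600
assert req["max_page_bytes"] == 2048
```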
If the access gateway has obtained an identification of a user that is behind a certain
request, it may apply portal functionality according to certain selections that the
user has explicitly made previously. The access gateway may also automatically
adapt its portal functionality according to certain identified user activity or user
FURTHER CONSIDERATIONS

Because of the typically wireless nature of the limited speed communications link, the client proxy can never be sure beforehand, without trying, whether the communications connection to the access gateway is functioning properly or not. In addition to link failures there is also the uncertainty factor of the access gateway or a certain content-providing server possibly being temporarily in a state where it is simply not responding. The client proxy may be arranged to send occasional queries to the access gateway just to make sure that an active communication connection can be set up quickly if there comes a request that necessitates downloading of information from the access gateway.
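The occasional-query idea above amounts to a keepalive: probe the gateway whenever the link has been silent for too long, so a real request finds a known-good connection. Interval, class and method names are illustrative assumptions.

```python
# Sketch of occasional liveness queries from the client proxy to the
# access gateway. `probe` stands in for sending a small query and
# returns True when the gateway replies.

class GatewayLink:
    def __init__(self, probe, keepalive_interval=60.0):
        self._probe = probe
        self._interval = keepalive_interval   # seconds between probes (assumed)
        self._last_ok = None
        self.alive = False

    def tick(self, now):
        # Call periodically; probes only when the interval has elapsed
        # since the last successful contact.
        if self._last_ok is None or now - self._last_ok >= self._interval:
            self.alive = self._probe()
            if self.alive:
                self._last_ok = now

link = GatewayLink(probe=lambda: True, keepalive_interval=60.0)
link.tick(now=0.0)
assert link.alive
link.tick(now=30.0)   # within the interval: no new probe is needed
assert link.alive
```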
The prioritization of connections over the limited speed communications link may be made dependent on certain telltale observations made at the client side. For example, if a browser program that a user uses to utilize network connections is run in a window, and this window is closed or reduced in size or otherwise found to be inactive, the priorities of all connections that relate to that particular browser can automatically be lowered. Correspondingly an observed increase in user activity (e.g. a reduced window being expanded again) may automatically cause priorities to be raised in order to prepare for potentially coming urgent requests.
The bandwidth-optimized protocol stack may be used for encryption, authentication and/or similar add-on functions that add value to a communications connection. The invention allows encryption
In a situation where a client proxy has the possibility of communicating with several
access gateways simultaneously, the selection of the access gateway that would
most probably offer the service that matches best the needs of the user of the client
It may also happen that there are no access gateways available at all. Since a user would most probably appreciate even a slow wireless connection more than no wireless connection at all, the client side may go into a "non-AG" mode where the requests from client applications according to the first, non-bandwidth-optimized protocol stack are passed transparently on over the wireless connection to the server application, and responses are received from said server application likewise according to the non-bandwidth-optimized protocol stack and passed on transparently to the appropriate client applications; in both actions the access gateway is bypassed. During the "non-AG" mode the client proxy should regularly poll for active, available access gateways, so that once one becomes available, bandwidth-optimized operation is resumed.
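The "non-AG" fallback described above can be sketched as follows: while no gateway is reachable, requests pass transparently; the proxy keeps polling and switches back to bandwidth-optimized operation as soon as a gateway answers. All names, and the string tags standing in for the two protocol stacks, are illustrative assumptions.

```python
# Sketch of "non-AG" mode fallback in the client proxy.
# find_gateway stands in for polling; it returns a gateway handle or None.

class ClientProxy:
    def __init__(self, find_gateway):
        self._find_gateway = find_gateway
        self.gateway = self._find_gateway()

    def handle_request(self, request):
        if self.gateway is None:
            self.gateway = self._find_gateway()    # regular polling for an AG
        if self.gateway is not None:
            return f"optimized({request})"         # via bandwidth-optimized stack
        return f"transparent({request})"           # non-AG mode: pass through

# First two polls find nothing, the third finds a gateway.
gateways = iter([None, None, "AG-1"])
proxy = ClientProxy(find_gateway=lambda: next(gateways))
assert proxy.handle_request("GET a") == "transparent(GET a)"
assert proxy.handle_request("GET b") == "optimized(GET b)"
```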
Claims
1. An arrangement for transferring digital data over a limited speed communications link, comprising:
2. An arrangement according to claim 1, characterised in that:
3. An arrangement according to claim 2, characterised in that
- the application layer entity (416,512) of the client proxy (511) comprises:
4. An arrangement according to claim 3, characterised in that:
5. An arrangement according to claim 3, characterised in that it comprises
6. An arrangement according to claim
7. A client side arrangement (401) for transferring digital data over a limited
speed communications link in communication with a server side arrangement (402)
located at a different side of the limited speed communications link than the client
side arrangement (401), the client side arrangement (401) comprising:
8. A client side arrangement
9. A client side arrangement according to claim 8, characterised in that the application layer entity (416,512) of the client proxy
11. A client side arrangement according to claim 9, characterised in that
- the application layer entity (416,512) of the client proxy (511) comprises multiplexing means (611) for multiplexing several logically separate connections between the client proxy (511) and the server side arrangement (402) into a single connection therebetween.
12. A client side arrangement according to claim 11, characterised in that
- the client proxy (511) is arranged to assign priorities to logically separate connections multiplexed by said multiplexing means (611), said priorities being based on at least one of the following: observed user activity, explicitly stated preferences of a user, contents for the transferring of which the connections are used.
13. A client side arrangement according to claim 9, characterised in that
14. A client side arrangement according to claim 9, characterised in that in using
said cache memory (610) said client proxy (511) is arranged to apply at least one of:
- an active caching algorithm designed for proactively preparing for a future need of
certain digital data by the client application
15. A client side arrangement according to claim 7, characterised in that the first communications protocol stack is a combination of HTTP over TCP/IP and the second communications protocol stack is WAP.
16. A client side arrangement according to claim 7, characterised in that the client application (411,502) and said client proxy (511) are arranged to operate within a single physical device.
17. A client side arrangement according to claim 7, characterised in that the client application (411,502) and said client proxy (511) are arranged to operate within at least two physically separate devices.
18. A client side arrangement according to claim 7, characterised in that
19. A client side arrangement according to claim 18, characterised in that during a term of conveying transferred digital data between the client application (411,502) and said server application (426,572) at the server side arrangement (402) according to the first communications protocol stack, said client proxy (511) is arranged to repeatedly attempt resuming a communications connection according to said second communications protocol stack with an access gateway (551) at the server side arrangement (402).
20. A server side arrangement (402) for transferring digital data over a limited
speed communications link in communication with a client side arrangement (401)
located at a different side of the limited speed communications link than the server
side arrangement (402), the server side arrangement (402) comprising:
21. A server side arrangement according to claim 20, characterised in that:
22. A server side arrangement according to claim 21, characterised in that the
application layer entity (423,552) of the access gateway (551) comprises:
23. A server side arrangement according to claim 22, characterised in that:
24. A server side arrangement according to claim 22, characterised in that:
25. A server side arrangement according to claim 22, characterised in that
26. A server side arrangement according to claim 22, characterised in that in using said at least one cache memory (921,922,923,924) said access gateway (551) is arranged to apply at least one of:
- an active caching algorithm designed for proactively preparing for a future need of certain digital data by the client side arrangement (401),
- a predictive caching algorithm designed for predicting certain network locations from which contents should be downloaded and forwarded to the client side arrangement (401) at some future moment of time,
27. A server side arrangement according to claim 22, characterised in that the
application layer entity (423,552) of the access gateway (551) comprises a coupling
between said connection handler (902) and an application layer protocol entity
(905) of the second communications protocol stack.
28. A server side arrangement according to claim 22, characterised in that the server application (426,572) and said access gateway (551) are arranged to operate within a single physical device.
30. A method for transferring digital data over a limited speed communications link where a client side subarrangement (401) and a server side subarrangement (402) are located at different sides of the limited speed communications link, and within the client side subarrangement (401) a client application (411,502) is arranged to receive and transmit digital data using a first communications protocol stack (412,503,504,505) and within the server side subarrangement (402) a server application (426,572) is arranged to receive and transmit digital data using the first communications protocol stack (425,573,574,575);
characterised in that the method comprises the steps of:
- conveying transferred digital data between the client application (411,502) and the server application (426,572) through a client proxy (511) within the client side subarrangement (401) and an access gateway (551) within the server side subarrangement (402),
- performing in said client proxy (511) protocol conversions between the first communications protocol stack (412,503,504,505) and a second communications protocol stack (415,521,522,523,524,525,526) that corresponds to a bandwidth efficiency that is better than a bandwidth efficiency to which the first communications protocol stack corresponds, and
- performing in said access gateway
32. A method according to claim 31, characterised in that step b) comprises the
substeps of:
33. A method according to claim 32, characterised in that it additionally comprises the steps of:
- handling said cache memory (610) as a number of logically separate cache memories on the basis of at least one of the following: user, user group, type of information stored,
- conducting at the client proxy (511) an expansive search through logically separate cache memories in order to look for certain requested digital data from other logically separate cache memories if that requested digital data was not found in a particular one of said logically separate cache memories, and
34. A method according to claim 32, characterised in that it additionally comprises at least one of the steps of:
- actively updating digital data stored in said cache memory (610) by applying an active caching algorithm designed for proactively preparing for a future need of certain digital data by the client application (411,502),
- predictively updating digital data stored in said cache memory (610) by applying a predictive caching algorithm designed for predicting certain network locations from which contents should be downloaded and forwarded to the client application (411,502) at some future moment of time,
- partially updating digital data stored in said cache memory (610) by applying a difference caching algorithm designed for reloading from a network only those parts of previously stored contents where changes have occurred,
- optimizing the process of downloading digital data from a network by applying an optimizing caching algorithm designed for optimizing, with respect to a limiting factor such as communication cost or amount of radio traffic, the amount and/or form of contents that should be downloaded from a network, and
- processing digital data stored in said cache memory (610) by applying a processing algorithm designed to improve response time by processing digital data stored in said cache memory (610) prior to receiving an explicit request for such digital data from a client application (411,502).
35. A method according to claim 34, characterised in that in cases where at least one of the active, predictive or difference caching algorithms is applied, downloading from a network for the purposes of such applied algorithm is at least partly implemented through a background communication connection to the server side subarrangement (402), so that said background communication connection has a different priority in resource allocation than potentially occurring simultaneous service to requests from the client application
36. A method according to claim 34, characterised in that in cases where at least one of the active, predictive or difference caching algorithms is applied, downloading from a network for the purposes of such applied algorithm is at least partly implemented through cost-optimized communication connections to the server side subarrangement (402), so that communication connections for the pur-
37. A method according to claim 34, characterised in that in cases where at least one of the active, predictive or difference caching algorithms is applied, downloading from a network for the purposes of such applied algorithm is at least partly implemented through traffic-optimized communication connections to the server side subarrangement (402), so that communication connections for the purposes of active caching are made during terms when other communications traffic with the server side subarrangement (402) is lower than a certain normal value.
38. A method according to claim 32, characterised in that it additionally comprises the step of updating digital data stored in said cache memory (610) by generating and processing an internal request, forwarding the processed internal request from the application layer entity (416,512) of the client proxy (511) downwards through a second protocol stack (415,521,522,523,524,525,526) to the server side subarrangement (402), receiving a response from the server side
39. A method according to claim 31, characterised in that step c) comprises the substep of selecting a particular access gateway among a number of available access gateways at a number of available server side subarrangements (402), the selection being based on at least one of the following: preconfigured information identifying a default access gateway, a dynamically obtained response to a query presented to a user, a dynamically obtained response to a query presented to a source within a network.
40. A method according to claim 31, characterised in that step c) comprises the
substeps of:
41. A method according to claim 31, characterised in that step c) comprises at
least one of the substeps of:
- authenticating the client proxy
42. A method according to claim 31, characterised in that step c) comprises the substep of checking whether other essentially simultaneous transmissions occur between the application layer entity (416,512) of the client proxy
43. A method according to claim 31, characterised in that step c) comprises the
substeps of:
c1) routing a request (1001) received at the access gateway
44. A method according to claim 43, characterised in that step c2) comprises the
substeps of:
c2/1) inquiring
45. A method according to claim 44, characterised in that it additionally comprises the steps of:
- handling said cache memory (921,922,923,924) as a number of logically separate cache memories (921,922,923,924) on the basis of at least one of the following: user, user group, client, client group, type of information stored,
- conducting at the access gateway (551) an expansive search through logically separate cache memories (921,922,923,924) in order to look for certain requested digital data from other logically separate cache memories if that requested digital data was not found in a particular one of said logically separate cache memories, and
46. A method according to claim 44, characterised in that it additionally comprises at least one of the steps of:
- actively updating digital data stored in said cache memory (921,922,923,924) by applying an active caching algorithm designed for proactively preparing for a future need of certain digital data by the client side subarrangement (401),
- predictively updating digital data stored in said cache memory (921,922,923,924) by applying a predictive caching algorithm designed for predicting certain
47. A method according to claim 43, characterised in that the processing (1015,1016,1017) of a response at the application layer entity (423,552) of the access gateway (551) as a part of step c3) comprises adapting certain contents of the response for transmission over a limited speed communication link to a client side subarrangement, thus implementing portal functionality.
48. A method according to claim 47, characterised in that said adapting comprises at least one of the following: adding information to the response, removing information from the response, replacing information in the response.
49. A method according to claim 47, characterised in that said adapting is made dynamically according to link conditions between the server side subarrangement (402) and the client side subarrangement (401).
50. A method according to claim 47, characterised in that said adapting is made according to certain previously obtained knowledge about certain capabilities of the client side subarrangement.
51. A method according to claim 47, characterised in that said adapting is made
according to certain previously obtained knowledge about certain explicitly ex-
pressed preferences of an identified user of the client side subarrangement.
52. A method according to claim 47, characterised in that said adapting is made according to certain previously obtained knowledge about certain implicitly re- vealed behaviour of an identified user of the client side subarrangement.