
Preview of T-Kernel 2.0 New Functions

T-Engine Forum
Embedded Platform Committee
T-Kernel 2.0 Working Group

Steady Achievements of T-Kernel

T-Kernel, established together with the T-Engine Project in 2002, has significantly improved the development efficiency of software for large-scale embedded devices. Built on the ITRON-specification OS, which at the time already had more than 20 years of achievements behind it, T-Kernel has been adopted in many products. It is fair to say that T-Kernel has established a firm position among real-time operating systems for the control of advanced embedded devices.

●Operating system that excels in control systems as well as information systems
Compared with the ITRON-specification OS, T-Kernel adds a device management function and a subsystem management function, which enhance the compatibility and reusability of device drivers and middleware. Furthermore, by using these functions, process-based programs similar to those on information-system operating systems such as Linux can also be executed on T-Kernel.
The advantage of process-based use of T-Kernel is a significant increase in development efficiency thanks to memory protection between programs. Process-based use suits complicated and advanced information processing, and because the affinity with Linux is high, porting Linux programs is also easy. On the other hand, for control programs requiring real-time performance, using T-Kernel directly, rather than in process-based form, delivers the high-speed processing expected of a real-time operating system.
The main feature of T-Kernel is that it combines the character of a control-system operating system, which performs real-time processing at high speed, with that of an information-system operating system, which performs complicated and advanced information processing. This combination matches the needs of recent advanced embedded devices.
For example, recent digital cameras must efficiently execute both complex, advanced information processing, such as facial and scene recognition by image analysis, and high-level real-time control, such as autofocus and image stabilization. If Linux is used here, image analysis programs may be easy to develop, but high-level real-time control is difficult. T-Kernel is strong at both kinds of processing, and image analysis programs developed on Linux can be ported to it.
In other embedded devices as well, including printers and car navigation systems, the number of devices requiring both complex, advanced information processing and real-time control is increasing. Many of the products that have adopted T-Kernel are exactly the kinds of products targeted by T-Kernel development from the beginning, a fact that confirms the appropriateness of T-Kernel's basic design policy, software structure, system call specification, and ultimately its implementation.

● To the second stage of "100-year software"
In the T-Engine Project, the robust software infrastructure built around T-Kernel has been described as "100-year software", to be used continuously over a long period of time. That concept remains unchanged, and we will firmly maintain the existing T-Kernel software structure and specifications.
However, eight years have passed since the birth of T-Kernel, and the changes in the environment during this time, driven by progress in semiconductor and device technology, such as higher performance, richer functionality, and larger storage capacity, have been extremely great. Some functions and the range of data that can be processed therefore need to be extended.
In response to such requests, "T-Kernel 2.0" will be released as a new real-time kernel specification upgraded from the first-generation T-Kernel (T-Kernel 1.0) specification. With T-Kernel 2.0, the T-Engine Project will strive for new deployments toward its second stage.

Added Functions of T-Kernel 2.0

Now, let us explain the functions newly added in T-Kernel 2.0, grouped roughly into four areas.

1. Significant enhancement of timer related functions
Since T-Kernel is an operating system for real-time control, functions for time-dependent processing are important. The existing T-Kernel is therefore already equipped with extensive time control functions: the cyclic handler and alarm handler functions, which run application-defined processing at specified times; a function that makes tasks wait for a specified period; and a timeout function that sets an upper limit on the time until a wait is released.
However, in these T-Kernel 1.0 functions, times are specified with millisecond (one-thousandth of a second) resolution, and resolutions finer than a millisecond cannot be specified directly (Note 1). This has been the specification since the development of the first-generation ITRON-specification OS in the mid-80s. Although the ITRON-specification OS was upgraded several times afterwards, the millisecond-resolution specification was retained for compatibility, and this point remained unchanged in T-Kernel 1.0.
On the other hand, looking at recent high-end microcomputers, performance has increased significantly, with CPU clocks ranging from several hundred MHz to over 1 GHz. When ITRON and T-Kernel system calls are executed on these chips, it is not uncommon for the execution time and the task dispatch time to be less than one microsecond, although the figures vary with conditions. One microsecond is one-thousandth of one millisecond. In other words, although CPUs and operating systems can do a great deal in units of time far smaller than one millisecond, the system call specification of the operating system could not take full advantage of this.
To resolve this issue, two classes of functions have been added to T-Kernel 2.0. One is a set of system calls for the various time management functions that accept specifications with microsecond resolution. The other is the newly added physical timer functions.

● Introduction of an API that can process with microsecond resolution
The basic data size handled by T-Kernel (the size of the INT type representing integers) is 32 bits, and the same 32 bits (the RELTIM and TMO data types) are used for specifying cyclic processing intervals, relative times, and timeout times.
When time is expressed with a 32-bit signed integer at millisecond resolution, since two to the 31st power is approximately 2 billion, up to approximately 24 days can be expressed (approximately 2 billion milliseconds is approximately 2 million seconds, or approximately 24 days). This is most likely a sufficient range for the cyclic processing intervals and timeout times of general applications.
However, with the same 32 bits, if the unit of time is the microsecond, the expressible range is 1/1000 of the millisecond case, in other words only about 35 minutes. Processing that waits for one hour then could not be expressed by the API (Note 2), which would affect even general use.
In T-Kernel 2.0, this issue has been resolved by introducing 64-bit data types. With 64 bits, even in microseconds, a time of 35 minutes × 2 to the 32nd power can be specified, completely resolving the issue of the expressible range of time.
If you have been using C for a long time, you may think of integers as 16 or 32 bits and find it surprising that 64 bits can be handled as a single integer rather than as a structure. However, the long long type, a 64-bit data type, has been officially part of the C language standard (C99) for more than 10 years, and compiler support for 64-bit integers is well established. There are almost no cases where 64-bit integers cannot be used in the compilers normally used with T-Kernel, including gcc; therefore, we decided to use 64-bit data types proactively. The typical use is time specification with microsecond resolution.
In T-Kernel 2.0, the data types shown in List 1 are added in relation to 64-bit values and microseconds. Data types that denote microseconds (μsec) end in "_u" or "_U" (u stands for μ), while other data types that denote 64-bit integers end in "_d" or "_D" (d stands for double integer).

List 1. Main Data Types Added to T-Kernel 2.0

typedef signed long long    D;      /* signed 64-bit integer */
typedef unsigned long long  UD;     /* unsigned 64-bit integer */
typedef long long           VD;     /* variable type 64-bit data */
typedef volatile D          _D;     /* volatile declaration */
typedef volatile UD         _UD;    /* volatile declaration */

typedef D   TMO_U;      /* 64-bit microsecond timeout */
typedef UD  RELTIM_U;   /* 64-bit microsecond relative time */
typedef D   SYSTIM_U;   /* 64-bit microsecond system time */

In this way, specification with microsecond resolution is possible in T-Kernel 2.0. However, the 32-bit millisecond APIs of T-Kernel 1.0 remain as they are, both because many applications do not need 64-bit microsecond specification and for compatibility with T-Kernel 1.0.
Accordingly, the APIs that take 64-bit microsecond specifications have been added under separate names in T-Kernel 2.0. Specifically, the names of the 64-bit microsecond APIs are formed by adding "_u" (u stands for μ) to the end of the corresponding T-Kernel 1.0 API names. Likewise, "_u" is added to the end of the names of parameters that become 64-bit microsecond values.
For example, the API that starts an alarm handler is tk_sta_alm() in T-Kernel 1.0; its 64-bit microsecond counterpart in T-Kernel 2.0 is tk_sta_alm_u() (Table 1 and Figure 1).

Table 1. Example of a 64-bit Microsecond API

APIs for "Start Alarm Handler"

T-Kernel 1.0

tk_sta_alm( ID almid, RELTIM almtim );
    Startup time almtim is specified in 32-bit milliseconds.

T-Kernel 2.0

tk_sta_alm( ID almid, RELTIM almtim );
    Startup time almtim is specified in 32-bit milliseconds.

tk_sta_alm_u( ID almid, RELTIM_U almtim_u );
    Startup time almtim_u is specified in 64-bit microseconds.



Figure 1. Alarm Handler Specifying Time with Microsecond Resolution

Specification in 64-bit microseconds is possible not only for the time specifications of time management functions such as the cyclic handler and alarm handler, but also for the wait time of the delayed task function (tk_dly_tsk()) and for the timeout of APIs that may enter a wait state. Moreover, APIs that return 64-bit microsecond information have been added to the functions that reference task and handler status containing time information. Table 2 lists the APIs added in T-Kernel 2.0 for processing with microsecond resolution.

Table 2. List of T-Kernel 2.0 APIs that Conduct Processing with Microsecond Resolution

APIs for Specifying 64-bit Microsecond Timeout

tk_slp_tsk_u      Put Invoking Task to Sleep
tk_wai_tev_u      Wait for Task Event
tk_wai_sem_u      Wait for Semaphore Resource
tk_wai_flg_u      Wait for Event Flag
tk_rcv_mbx_u      Receive Message from Mailbox
tk_loc_mtx_u      Lock Mutex
tk_snd_mbf_u      Send Message to Message Buffer
tk_rcv_mbf_u      Receive Message from Message Buffer
tk_cal_por_u      Call Port for Rendezvous
tk_acp_por_u      Accept Port for Rendezvous
tk_get_mpf_u      Get Fixed-size Memory Block
tk_get_mpl_u      Get Variable-size Memory Block
tk_rea_dev_du     Start Reading from a Device
tk_wri_dev_du     Start Writing to a Device
tk_wai_dev_u      Wait for Completion of Device Request
MLockTmo_u        Lock High-speed Multi-lock

APIs for Specifying 64-bit Microsecond Time (Excluding Timeout)

tk_chg_slt_u      Change Task Slice Time
tk_dly_tsk_u      Delay Task
tk_set_tim_u      Set Time
tk_cre_cyc_u      Create Cyclic Handler
tk_sta_alm_u      Start Alarm Handler

APIs for Obtaining 64-bit Microsecond Time Information

tk_inf_tsk_u      Reference Task Statistics
tk_ref_tsk_u      Reference Task Status
tk_get_tim_u      Get System Time
tk_get_otm_u      Get System Operating Time
tk_ref_cyc_u      Reference Cyclic Handler Status
tk_ref_alm_u      Reference Alarm Handler Status
td_ref_tsk_u      Reference Task Status (dedicated debugging function)
td_inf_tsk_u      Reference Task Statistics (dedicated debugging function)
td_get_tim_u      Get System Time (dedicated debugging function)
td_get_otm_u      Get System Operating Time (dedicated debugging function)
td_ref_cyc_u      Reference Cyclic Handler Status (dedicated debugging function)
td_ref_alm_u      Reference Alarm Handler Status (dedicated debugging function)

Note that time-dependent processing in T-Kernel, such as the startup of cyclic handlers and alarm handlers and the release of wait states on timeout, is carried out in the system timer interrupt handler, which runs at a fixed interval. Therefore, even after a time specified with microsecond resolution has elapsed, the actual processing is not executed, and neither the cyclic handler nor the alarm handler starts, until the next system timer interrupt occurs. In other words, the interrupt interval of the system timer is the effective time resolution of time-related processing in T-Kernel. The timer interrupt interval is set in the system configuration information (defined in the SYSCONF file, etc.), and the default value is 10 milliseconds in both T-Kernel 1.0 and T-Kernel 2.0. If the interval is set shorter, for example to 100 microseconds, behavior with 100-microsecond resolution becomes possible. Note, however, that the system overhead of timer interrupts increases as the interval becomes shorter.

●Addition of physical timer functions
The functions described above enable specification in smaller units of time than before through a genuine functional enhancement of the operating system. Another approach is to enhance the time-related functions by making good use of physical resources, namely the powerful hardware timers now available in large numbers. This is the physical timer function added in T-Kernel 2.0.
In recent highly functional microcomputers, many peripheral input/output devices are integrated on a single chip, and so-called SoCs (Systems on Chip) are popular. The functions placed on one chip usually include multiple independently operating hardware timers, in some cases ten or more. Even if one timer is used as the system timer for T-Kernel, the many remaining timers can be used freely by users.
Therefore, T-Kernel 2.0 standardizes APIs for setting these hardware timers, acquiring their count values, and defining interrupt handlers that start when a specified time elapses, providing them as a newly added function called the "physical timer function." With this function, T-Kernel 2.0 aims to improve the development efficiency and portability of programs that operate hardware timers on SoCs.
As seen by T-Kernel users, a physical timer behaves as a hardware counter whose count value increases monotonically from 0 by one at a fixed time interval. When the counter reaches the upper limit value specified for that physical timer, the handler (physical timer handler) defined for it starts, and at the same time the counter returns to 0. When multiple physical timers are used, they all operate independently and are identified by physical timer numbers such as 1, 2, and so on (Figure 2). Table 3 lists the physical timer function APIs.

Figure 2. Start of Multiple Handlers by Multiple Physical Timers

Table 3. List of T-Kernel 2.0 Physical Timer Function APIs

StartPhysicalTimer            Start Physical Timer
StopPhysicalTimer             Stop Physical Timer
GetPhysicalTimerCount         Get Physical Timer Count
DefinePhysicalTimerHandler    Define Physical Timer Handler
GetPhysicalTimerConfig        Get Physical Timer Configuration Information

At a glance, the behavior of physical timers and physical timer handlers resembles that of cyclic handlers and alarm handlers, and there is some overlap in function. However, since the physical timer functions operate independently of the T-Kernel system timer, they are not affected by the system timer interrupt interval, and overhead can be minimized. Even for a time entirely unrelated to the system timer interrupt interval, a physical timer can handle processing very accurately, with no extra timer interrupts occurring before the handler starts. By contrast, when cyclic handlers and alarm handlers are used, the system timer interrupt interval must be made sufficiently small to ensure accurate handler startup times, and the overhead of timer interrupts increases.
Conversely, the advantage of cyclic handlers and alarm handlers is their high degree of freedom, since there is no restriction on the number of handlers that can be defined. In other words, multiple time-related functions (startup of cyclic handlers and alarm handlers, timeouts, etc.) can be realized simultaneously using only one system timer, with the multiplexing handled properly by a program inside T-Kernel. A physical timer, on the other hand, can process only one request at a time. For example, if there is a request to start physical timer handler A in 3,500 microseconds and another to start physical timer handler B in 2,800 microseconds, two physical timers are needed just to process these two requests. In the sense that it provides the hardware timer functions almost as they are, the level of abstraction of the physical timer function is low; the name "physical timer" derives from this fact. However, on SoCs where multiple hardware timers are available, the inability of one physical timer to process multiple simultaneous requests poses no problem, and the low level of abstraction translates into small overhead.
Alarm handlers and cyclic handlers thus differ from physical timer handlers in the respects above, and the best advice to developers is to use whichever suits the hardware configuration and the application's needs. T-Kernel 2.0 introduces both functions in full awareness of the overlap between them.

2. Support for Large Capacity Devices
The conventional T-Kernel is a 32-bit operating system; although 64 bits are used for some data such as the system time, the API parameters and most of the data processed by the operating system are 32 bits.
However, with the increasing speed and capacity of recent embedded devices, cases have emerged where 32 bits are practically insufficient. One example is specifying times in microseconds, as explained in the previous section. Another is handling large-capacity devices such as hard disks.
The APIs of T-Kernel 1.0 express all device block numbers as 32-bit signed integers. This applies to hard disks as well, so the maximum block number (sector number) is 2 to the 31st power minus 1, or approximately 2 billion. With 512-byte sectors, the maximum hard disk capacity that T-Kernel 1.0 can handle is therefore approximately 1 TB (512 bytes × approximately 2 billion sectors).
In other words, when handling a large-capacity hard disk with the T-Kernel 1.0 device management APIs, the parts beyond 1 TB cannot be accessed directly. Yet hard disks for PCs exceeding 1 TB are no longer unusual, and cases of using hard disks of 1 TB or more in embedded devices will increase in the future.
Therefore, in T-Kernel 2.0, 64-bit data types were introduced into some of the device management API parameters. Specifically, in tk_rea_dev() and tk_wri_dev(), which read from and write to devices, the start parameter specifying the start position of the read or write (the sector number in the case of a hard disk) was widened to 64 bits (Figure 3).

Figure 3. Writing to a Large-Capacity Hard Disk

However, in consideration of uses that do not require large-capacity device support, and for compatibility with T-Kernel 1.0, the traditional 32-bit APIs of T-Kernel 1.0 remain unchanged, and APIs allowing 64-bit specification were added under separate names. This policy is the same as the one for time specification in microseconds.
The added APIs are distinguished by appending "_d" (d stands for double integer) to the T-Kernel 1.0 API names. Likewise, "_d" is appended to the names of parameters that become 64-bit: start becomes start_d.
For example, the API that writes to devices is tk_wri_dev() in T-Kernel 1.0 and becomes tk_wri_dev_du() in T-Kernel 2.0 (Table 4). The suffix contains both "_d", indicating that the write start position became 64-bit data, and "_u", indicating that the timeout is specified in microseconds with a 64-bit integer.

Table 4. Example of a Device Management API Where the Start Position Was Made 64 Bits

APIs for "Start Writing to a Device"

T-Kernel 1.0

tk_wri_dev( ID dd, INT start, VP buf, INT size, TMO tmout );
    Write start position start is specified in 32 bits.
    Timeout time tmout is specified in milliseconds with a 32-bit integer.

T-Kernel 2.0

tk_wri_dev( ID dd, W start, VP buf, W size, TMO tmout );
    Write start position start is specified in 32 bits.
    Timeout time tmout is specified in milliseconds with a 32-bit integer.

tk_wri_dev_du( ID dd, D start_d, VP buf, W size, TMO_U tmout_u );
    Write start position start_d is specified in 64 bits.
    Timeout time tmout_u is specified in microseconds with a 64-bit integer.

In addition, the start and size parameters, which were INT type in T-Kernel 1.0, are changed to W type in T-Kernel 2.0 to clarify the data size. Since both INT and W are 32 bits, there is no substantive change in the specification.
For the device management functions, the programs that actually perform input and output with devices are not included in T-Kernel itself; they are provided separately as device drivers or developed by users. Therefore, to actually perform device input/output supporting 64-bit data, the device drivers must be upgraded to 64-bit-capable versions along with the upgrade to T-Kernel 2.0. For the request packets sent from T-Kernel's device management function to device drivers, a 64-bit version (the T_DEVREQ_D type) is likewise added alongside the 32-bit version (the T_DEVREQ type) used in T-Kernel 1.0.
Incidentally, in the T-Kernel 2.0 device management function, while the start position is 64 bits, the size (number of blocks) of data read or written remains 32 bits. The addresses of input/output buffers, etc. are also 32 bits, the same as the general pointer type. We do not assume that the amount of data input or output in a single request exceeds the size expressible in 32 bits (specifically, 2 GB): most memory address spaces are still 32 bits at this point, and handling more than 2 GB at once is not realistic, not only in the other functions of T-Kernel but also in libraries, development environments, and other middleware. The meaning of large-capacity device support in T-Kernel 2.0 is that all areas of a large-capacity hard disk, including those whose sector numbers exceed 2 to the 31st power, become accessible.

3. Enhancement of System Management Program Compatibility
T-Kernel 2.0 adds several APIs for functions that control and manage the overall system, such as cache control and the setting of memory access rights with the MMU (Memory Management Unit) (Table 5). Opportunities to use these APIs from general applications will probably not be frequent; rather, they are intended for programs that manage the overall system and for debugging programs.

Table 5. List of T-Kernel 2.0 APIs Related to the MMU and Cache

GetSpaceInfo       Get Various Address Space Information
SetMemoryAccess    Set Memory Access Right
SetCacheMode       Set Cache Mode
ControlCache       Control Cache

In T-Kernel 1.0 as well, the operating system made appropriate settings for the cache and MMU and used these functions internally, and some APIs for cache control, etc. were introduced as implementation-dependent specifications. In other words, individual support was possible in the past too. However, to further enhance the compatibility of system management programs, it was decided to include APIs for these functions in the standard specification of T-Kernel 2.0.

4. Addition of Utility Functions
Some useful functions often used in developing device drivers, middleware, and applications on T-Kernel have been added to the T-Kernel 2.0 specification. The aim is to improve development efficiency and compatibility by expanding the scope of standardization.
What is added is a set of exclusive-control functions called the fast lock and fast multi-lock (Table 6). Compared with exclusive control using conventional objects such as semaphores, processing is faster when the system does not enter the wait state.

Table 6. List of T-Kernel 2.0 Fast Lock/Fast Multi-lock APIs

CreateLock     Create High-speed Lock
DeleteLock     Delete High-speed Lock
Lock           Lock High-speed Lock
Unlock         Unlock High-speed Lock

CreateMLock    Create High-speed Multi-lock
DeleteMLock    Delete High-speed Multi-lock
MLock          Lock High-speed Multi-lock
MLockTmo       Lock High-speed Multi-lock (timeout specification)
MLockTmo_u     Lock High-speed Multi-lock (microsecond timeout specification)
MUnlock        Unlock High-speed Multi-lock

These functions were often provided as libraries with the conventional T-Kernel, and drivers and middleware using them have already been developed. In T-Kernel 2.0, the functions previously provided as libraries are incorporated into the specification of T-Kernel itself.

Upward Compatibility with T-Kernel 1.0

The T-Kernel 2.0 specification is upward compatible with the T-Kernel 1.0 specification. Migration to the functionally enhanced kernel can therefore proceed smoothly while existing T-Kernel applications continue to be used. Moreover, not only source-level compatibility but also binary compatibility is preserved.
For example, even if T-Kernel 1.0 is upgraded to T-Kernel 2.0, device drivers, middleware, applications, and so on that ran on T-Kernel 1.0 can operate without recompilation.
This is not to say, however, that replacing the T-Kernel 1.0 used in existing embedded devices with T-Kernel 2.0 is recommended. If microsecond time specification and large-capacity device support are unnecessary, there is no particular inconvenience in continuing to use the conventional T-Kernel 1.0.
Generally, T-Kernel 2.0 will be used in the development of new products or in updates of the CPU and hardware of existing embedded devices, with an eye on the future. We expect that conventional devices using T-Kernel 1.0 and new devices using T-Kernel 2.0 will coexist, with the latter gradually increasing over the coming years.

New Deployments of T-Engine Project

We have now outlined the technology and added functions of T-Kernel 2.0. As stated earlier, T-Kernel 2.0 is an operating system that builds on the eight-year deployment of T-Kernel 1.0; its functions were added to address the increasing performance, functionality, and capacity of embedded devices while maintaining the concept of "100-year software" advocated since the beginning of the project. Upward compatibility with the conventional T-Kernel is maintained, so we would like existing users to adopt T-Kernel 2.0 without worry.
Beyond the technical enhancement of the operating system itself, new efforts are also under way to improve the ease of use of T-Kernel 2.0, such as providing the specification in XML format for electronic use and a distribution license with a traceability function using ucodes.
Furthermore, the T-Engine Project as a whole is carrying out new activities to promote the use of T-Kernel, including the development of a reference board on which T-Kernel 2.0 runs and a one-stop service in which the T-Engine Forum collectively provides T-Kernel and its associated software.
We would very much appreciate it if you used these services together with T-Kernel 2.0 and continued using T-Kernel for the development of embedded devices.

Note 1) This does not mean that task dispatch and the progress of operating system processing are carried out in millisecond units. Task dispatch and operating system processing, including system call execution, run in real time, making maximum use of the CPU's performance regardless of whether times are expressed in milliseconds or microseconds. Even in ITRON and T-Kernel 1.0, if you operate a hardware timer yourself to force a periodic interrupt every microsecond or so and perform processing in the interrupt handler, time-dependent processing is possible without being limited to millisecond units. However, such processing is complicated for individual users and is not recommended from the standpoint of standardization and compatibility, so it was decided to enhance the timer-related functions substantially in T-Kernel 2.0.

Note 2) "API" is an abbreviation of "Application Program Interface." Here it is used with almost the same meaning as "system call." Strictly speaking, however, some functions in T-Kernel/SM are provided as macros or libraries, and calls to these are not system calls. "API" is therefore used as a collective term, broader than "system call," for the function specifications used to call each function of T-Kernel.
