Business Continuity & Disaster Recovery for iWoWSoft HRMS

Version: 3.3  Effective Date: November 2025
Applies To: iWoWSoft HRMS — SaaS, IaaS and On-Premises Deployments

1. Overview

iWoWSoft’s HRMS platform is designed with a strong focus on availability and data protection.
We host our production systems in a Tier III data centre in Cyberjaya and maintain multiple backup locations, including offline copies, to protect customer data against hardware failures, cyber incidents and catastrophic events.

Our objectives are to:

  • Maintain availability of the HRMS service as far as reasonably possible.
  • Restore service within defined Recovery Time Objectives (RTO).
  • Limit data loss within defined Recovery Point Objectives (RPO), including in worst-case scenarios.

This article describes our current Business Continuity (BC) and Disaster Recovery (DR) practices at a high level.


2. Scope

This BC/DR statement covers:

  • Systems
    • iWoWSoft HRMS production application (web and API).
    • Production databases storing customer HR data.
    • File/object storage used by the HRMS (e.g. attachments, documents).
    • Supporting infrastructure (networking, operating systems).
    • Monitoring, logging and alerting.
    • Customer support / ticketing tools.
  • Locations
    • Primary hosting environment: IPServerOne CJ1 Tier III data centre in Cyberjaya, Malaysia.
    • Office environment with an alternative server that can be used as a recovery environment if required.
    • Off-site / offline storage within our office.

3. Hosting & Data Centre

Our primary infrastructure is hosted with IPServerOne in its CJ1 data centre in Cyberjaya:

  • CJ1 is a Tier III facility designed for high availability, with redundant power and cooling and 24×7 operations.
  • IPServerOne’s Malaysian data centres and cloud infrastructure hold multiple certifications (such as ISO 27001, ISO 27017, PCI-DSS and SOC 2 Type II) at the provider / data centre level.

In addition to the data centre, iWoWSoft maintains an alternative server at our office which can be used as a recovery environment if the primary hardware in the data centre cannot be repaired or replaced within a reasonable timeframe.

Our hosting architecture may evolve over time (for example, hardware refreshes or additional sites) while maintaining equivalent or stronger security and availability controls.


4. Backups & Data Retention

To protect customer data, we maintain multiple copies of production database backups across different locations:

  • Daily database backups are stored:
    • On the production server; and
    • On NAS (Synology) storage within the data centre.
  • From the NAS, backup files are securely transferred to our office and then copied to offline media (offline / off-site copy).

Because backup files are large, transfer from the data centre to our office can take longer than one day. Our recovery planning and RPO explicitly take this into account.

Retention:

  • Daily backup copies are retained for at least 20 days.
  • In addition, we keep at least one monthly backup copy for longer-term reference (for example, investigating historical changes over past months).

Backups are access-controlled and, where supported by the underlying technologies, encrypted at rest.
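
To make the retention rules above concrete, the following Python sketch prunes a backup directory to the 20-day daily window while keeping the oldest copy of each month. It is an illustration only: the directory path and the hrms-YYYY-MM-DD.dump.gz naming convention are assumptions for the example, not our actual tooling.

```python
#!/usr/bin/env python3
"""Illustrative retention pruning: keep at least 20 daily copies plus one
copy per month. Path and file naming below are hypothetical examples."""

from datetime import date, timedelta
from pathlib import Path

BACKUP_DIR = Path("/backups/hrms")   # hypothetical NAS mount point
DAILY_RETENTION_DAYS = 20            # daily copies kept at least 20 days

def parse_backup_date(path: Path) -> date:
    # Assumes files are named like hrms-YYYY-MM-DD.dump.gz (hypothetical).
    return date.fromisoformat(path.stem.split(".")[0].removeprefix("hrms-"))

def prune(today: date | None = None) -> None:
    today = today or date.today()
    cutoff = today - timedelta(days=DAILY_RETENTION_DAYS)
    monthly_kept: set[tuple[int, int]] = set()
    for backup in sorted(BACKUP_DIR.glob("hrms-*.dump.gz")):
        taken = parse_backup_date(backup)
        if taken >= cutoff:
            continue                   # inside the 20-day daily window
        month = (taken.year, taken.month)
        if month not in monthly_kept:
            monthly_kept.add(month)    # keep the oldest copy of each month
            continue
        backup.unlink()                # older duplicate within that month

if __name__ == "__main__":
    prune()
```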


5. Recovery Objectives (RTO & RPO)

We distinguish between:

  • Typical operational incidents – e.g. hardware failure, non-destructive software faults, upstream connectivity problems – where production data remains intact; and
  • Destructive incidents – e.g. severe storage failure or corruption – where we must restore from backup.

5.1 Typical Targets

Where the primary data centre and storage remain intact, our operational targets are:

| Component | Typical RTO (time to restore service) | Typical RPO (maximum data loss) | Notes |
| --- | --- | --- | --- |
| HRMS Application (Web & API) | Up to 1 business day | 0 to 1 business day | Many incidents are resolved without a backup restore (RPO ≈ 0). If a backup restore is required, RPO is up to 1 day. |
| Production Database | Up to 1 business day | 0 to 1 business day | Daily backups stored in multiple locations. |
| File/Object Storage (attachments) | Up to 1 business day | 0 to 1 business day | Restored from the latest available backup set, only when necessary. |

In many real-world cases – for example, a memory failure that requires hardware replacement, or an upstream network / CDN issue – we:

  • Do not restore from backup, because the production data is intact; and
  • Focus on restoring connectivity or replacing hardware so that customers can resume access to the current data.

In these situations, there is no data loss (RPO ≈ 0); there is only a temporary service interruption until the issue is resolved.

When we do need to restore from backup (for example, in case of unrecoverable corruption), our daily backup schedule means a maximum typical data loss of up to one day.

These are targets, not absolute guarantees; actual recovery time can vary depending on incident complexity (for example, diagnosis, hardware procurement, or third-party issues).

5.2 Worst-Case Scenario (Full Destructive Loss)

In an extremely unlikely scenario where:

  • A disaster or malicious activity wipes all online copies in the data centre; and
  • We must rely solely on offline / off-site copies stored in our office,

the maximum potential data loss (RPO) we communicate to customers is up to 7 days.

This conservative figure accounts for:

  • The time required to transfer very large backup files from the data centre to our office; and
  • Continued growth in backup size and transfer duration over time.
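
The 7-day figure can be read as a simple budget built from these components. The sketch below shows the arithmetic; the individual component values are assumptions chosen for the example, and only the 7-day total is the figure we communicate to customers.

```python
# Illustrative worst-case RPO budget for the offline / off-site copy.
# Component values are assumptions for the example; only the 7-day
# total is the communicated figure.

backup_interval_days = 1      # daily backups: up to 1 day since the last backup
transfer_to_office_days = 2   # large files can take more than a day (assumed)
growth_margin_days = 4        # headroom for growing backup sizes (assumed)

worst_case_rpo_days = (backup_interval_days
                       + transfer_to_office_days
                       + growth_margin_days)
assert worst_case_rpo_days == 7   # the communicated worst-case RPO
```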

In such a scenario, we would:

  1. Rebuild infrastructure on new hardware (with the existing provider or an alternative).
  2. Restore from the most recent available offline / off-site backup copy.
  3. Work closely with affected customers to manage the impact and assist with any necessary data reconciliation.

6. Business Continuity

Our Business Continuity approach focuses on ensuring that we can continue to serve customers even when infrastructure or physical premises are affected.

Key measures include:

  • Alternative recovery server at office
    An alternative server is available in our office to be used as a recovery environment if data centre hardware cannot be repaired or replaced in a reasonable timeframe.
  • Remote-capable operations
    Our support and technical teams operate in a way that allows them to work remotely.
    Critical tools (ticketing/email, collaboration and monitoring) are cloud-based and accessible from multiple locations.
  • Vendor SLAs and operational assumptions
    We maintain SLAs with our hardware vendor targeting response within 4 hours.
    In practice, sourcing replacement hardware can sometimes take longer; our BC/DR strategy assumes that such delays are possible and provides alternative recovery paths (office recovery server and offline backups).

During significant disruptions, we prioritise:

  1. Restoring core HRMS functionality and access to customer data.
  2. Ensuring customers can continue essential HR operations.
  3. Restoring non-critical or batch features after core recovery is achieved.

7. Disaster Recovery Approach

7.1 General Principles

Our Disaster Recovery process is designed to protect both availability and data integrity:

  • We only restore from DR / offline backups when the primary data centre or primary data set is unavailable or compromised.
  • We avoid restoring from an older backup if the current production data is still intact, to prevent unnecessary data loss and complex reconciliation.

7.2 When We Do (and Do Not) Invoke DR

In practice:

  • If the application and database in the data centre are healthy and up to date, but there is:
    • A network or upstream provider issue (such as routing or CDN problems that prevent users from reaching the system); or
    • A hardware component failure (for example, waiting for replacement components while data on disk remains intact),
    then we do not invoke DR or restore from an older backup.

Restoring in these circumstances would:

  • Roll customers back to an earlier copy of their data (for example, up to one day earlier); and
  • Create significant data reconciliation issues once the primary environment is restored.

In such cases, our priorities are to:

  1. Preserve the current, accurate data in the primary environment.
  2. Work with our data centre and hardware vendors under their SLAs to restore service on the primary system as quickly as possible.
  3. Keep customers informed about status and progress throughout the incident.

We reserve full DR restoration (using offline / off-site backups, including office copies) for scenarios where:

  • The primary data centre cannot be recovered within a reasonable timeframe; or
  • The primary data set has been destroyed or irreversibly compromised (for example, a destructive attack or catastrophic data loss).

This approach balances continuity with data correctness. We prefer to temporarily delay access while protecting the current data rather than bring the system up quickly on a stale copy that could cause long-term inconsistency.
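
The decision rule in this section reduces to a small predicate: full DR restoration is reserved for loss of the primary data set or an unrecoverable data centre. A minimal sketch of that logic, with illustrative names and flags (not our actual runbook code):

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    primary_data_intact: bool       # production data on disk is still correct
    dc_recoverable_in_time: bool    # data centre fixable in a reasonable window

def recovery_path(a: IncidentAssessment) -> str:
    """Illustrative mirror of the decision rule in section 7.2."""
    if a.primary_data_intact and a.dc_recoverable_in_time:
        # e.g. network/CDN issue or hardware swap with data intact:
        # repair in place; do NOT roll back to an older backup (RPO ~ 0).
        return "restore-primary-in-place"
    if not a.primary_data_intact:
        # destroyed or irreversibly compromised data set:
        return "full-dr-restore-from-offline-backup"
    # data intact, but the data centre cannot be recovered in time:
    return "rebuild-on-office-recovery-server"
```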

7.3 Typical DR Steps

When DR is invoked, the process typically involves:

  1. Incident assessment and DR decision (based on the principles above).
  2. Selection of recovery path, for example:
    • Repair / replace hardware in the data centre and restore from backups; or
    • Restore services to the office recovery server using the latest available backups.
  3. Restoring from a clean backup (online or offline / off-site, depending on the scenario).
  4. Application and data validation (smoke tests, key functional checks).
  5. Gradual return of customer traffic to the restored environment, with heightened monitoring.
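
Step 4 (validation) typically means scripted smoke tests before customer traffic is returned. As an illustration only, the sketch below checks a restored environment's health endpoint and a few key pages; the host name and endpoint paths are hypothetical stand-ins.

```python
"""Illustrative post-restore smoke test; host and endpoints are hypothetical."""
import sys
import urllib.request

RESTORED_BASE = "https://dr.example.internal"   # hypothetical restored host

CHECKS = [
    "/health",        # application responds
    "/api/version",   # API layer is up
    "/login",         # core page renders
]

def smoke_test() -> bool:
    ok = True
    for path in CHECKS:
        url = RESTORED_BASE + path
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                passed = resp.status == 200
        except OSError:               # connection errors and HTTP errors
            passed = False
        print(("PASS" if passed else "FAIL") + "  " + url)
        ok = ok and passed
    return ok

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```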

8. BC/DR Testing

We conduct periodic BC/DR tests to validate our ability to restore services from backups and to improve our procedures over time.

  • We aim to perform at least one technical BC/DR test each year, typically in June.
  • Tests may include:
    • Technical restore exercises – for example, restoring database backups to an alternative environment and validating application functionality.
    • Tabletop exercises – scenario walk-throughs focusing on decision-making, roles and communication.

For each test, we maintain internal records covering:

  • Objectives and scope.
  • Scenario and steps performed.
  • Actual restoration times and observed RTO/RPO.
  • Issues found and follow-up actions.
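
During a technical restore exercise, the observed RTO and RPO can be captured mechanically alongside these records. A sketch of how such a test record might be produced; run_restore() is a simulated placeholder for the real procedure under test, and the field names are illustrative.

```python
"""Illustrative capture of observed RTO/RPO during a restore exercise."""
import json
import time
from datetime import datetime, timedelta, timezone

def run_restore() -> datetime:
    # Placeholder: perform the test restore and return the timestamp of
    # the backup that was restored (this determines the observed RPO).
    time.sleep(1)                                             # stand-in work
    return datetime.now(timezone.utc) - timedelta(hours=18)   # simulated age

def record_test(objective: str, scenario: str) -> dict:
    started = time.monotonic()
    backup_taken_at = run_restore()
    finished = datetime.now(timezone.utc)
    record = {
        "objective": objective,
        "scenario": scenario,
        "observed_rto_hours": round((time.monotonic() - started) / 3600, 2),
        "observed_rpo_hours": round(
            (finished - backup_taken_at).total_seconds() / 3600, 2),
        "issues_and_follow_ups": [],   # filled in during the debrief
    }
    with open(f"bcdr-test-{finished.date()}.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

if __name__ == "__main__":
    record_test("Annual technical restore test",
                "Restore latest database backup to alternative environment")
```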

A high-level summary of recent BC/DR tests can be provided to customers on request, subject to confidentiality considerations.


9. Incident Communication

For major outages or incidents that materially affect availability or data:

  • We notify affected customers via email, integrated with our ticketing system, so that:
    • Each affected customer has a dedicated ticket; and
    • All updates and closure information are tracked and auditable.
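
Operationally, "one dedicated ticket per affected customer" is a simple fan-out over the affected tenant list. The sketch below shows the shape of that loop; TicketingClient and its create_ticket method are hypothetical stand-ins, not a real ticketing-system API.

```python
"""Illustrative incident-notification fan-out; the client is hypothetical."""
from dataclasses import dataclass

@dataclass
class TicketingClient:
    base_url: str

    def create_ticket(self, customer_id: str, subject: str, body: str) -> str:
        # Hypothetical: a real integration would call the ticketing
        # system's API here and return the new ticket's identifier.
        print(f"ticket for {customer_id}: {subject}")
        return f"TICKET-{customer_id}"

def notify_affected(client: TicketingClient,
                    affected_customers: list[str],
                    incident_summary: str) -> dict[str, str]:
    tickets = {}
    for customer in affected_customers:
        # One dedicated ticket per affected customer, so every update
        # and the closure message are tracked and auditable.
        tickets[customer] = client.create_ticket(
            customer,
            subject="Service incident notification",
            body=incident_summary,
        )
    return tickets
```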

Responsibilities:

  • Our support team coordinates customer-facing communication (initial notices, updates and closure messages).
  • Our technical team leads technical investigation, recovery actions and post-incident analysis.

Customers can also contact our support team directly through the usual support channels if they have questions during or after an incident.


10. Data Centre Certifications (Hosting Provider)

Our hosting provider, IPServerOne, maintains a range of certifications and attestations for its Malaysian data centres and cloud infrastructure, including the CJ1 facility in Cyberjaya. These include, for relevant services:

  • Tier III data centre classification (concurrently maintainable).
  • ISO 27001 – Information Security Management System.
  • ISO 27017 – Information Security for Cloud Services.
  • PCI-DSS – Payment Card Industry Data Security Standard.
  • SOC 2 Type II – Service Organisation Control report for cloud and data centre infrastructure.

These certifications apply to the data centre and underlying infrastructure operated by IPServerOne.

iWoWSoft does not currently hold its own ISO 27001 or SOC 2 certification. Instead, we:

  • Build our HRMS platform on top of IPServerOne’s certified infrastructure; and
  • Implement our own application-level security, backup, access control and operational procedures as described in this Business Continuity & Disaster Recovery overview.

Copies or summaries of relevant data centre certificates can be provided to customers on request, subject to any conditions imposed by the provider.


11. Questions

If you have specific continuity or recovery requirements (for example, stricter RTO/RPO, customer-specific DR arrangements or integration with your own DR plans), please contact our support team or your account manager to discuss options.
