There's a familiar pattern in platform decisions at larger nonprofits. The Head of Programmes has a demo, likes what they see and starts imagining what it would mean for their team. Then the proposal goes to the IT/Systems/Digital team and the questions change entirely, often throwing a spanner in the works.
Your colleagues aren't being difficult. A Head of IT is paid to think about what might happen when things go wrong or when you need to transition to another system or when a regulator asks certain questions... or when a member of staff makes a mistake that needs to be unpicked and rectified. These are precisely the kind of questions to ask before committing to any platform. Asking them after you've gone live can become considerably more expensive.
This article sets out the technical due diligence questions we think every nonprofit should ask of any potential platform provider. It explains why each question matters and describes what a good answer looks like. It's designed to be used as a checklist. Let's dive in.
1. Where is our data hosted and in which country?
Why it matters: Data residency affects which legal framework governs your data and what rights you have if something goes wrong. For UK and European nonprofits, GDPR requires that data stored outside the UK or EU meets equivalent protection standards. For organisations working internationally, other regimes come into play: US requirements such as HIPAA where health data is involved, POPIA for South African nonprofits, the Privacy Act in Australia; the list goes on.
What to ask:
- Which cloud provider do you use, and in which region or country is data stored?
- Is data replicated to other regions?
- What happens to data residency if you are acquired or change infrastructure providers?
What a good answer looks like: A clear, specific answer naming the cloud provider, the country of hosting, and the regulatory rationale for that choice. Vague answers like "the cloud" or "secure servers" are not sufficient. If the vendor cannot tell you where your data physically lives, that is a red flag.
2. What are your uptime guarantees and how do you handle maintenance?
Why it matters: Downtime affects programme delivery. If your staff cannot access the platform during a session, an intake appointment, or a reporting deadline, the consequences are real. Uptime guarantees are also an indicator of how seriously a vendor takes operational reliability.
What to ask:
- What is your contractual uptime guarantee?
- Do you have scheduled maintenance windows, and how much notice do you give?
- Where can we monitor platform status in real time?
- What is your incident response process when the platform goes down?
What a good answer looks like: A specific percentage (99.9% or above), a public status page you can check independently and a clear explanation of how updates are deployed. The best platforms release updates continuously without requiring downtime. If a software supplier cannot point you to a public status page, ask why not.
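It's worth translating those percentages into hours, because the decimals matter more than they look. A quick back-of-envelope calculation, purely illustrative:

```python
# Translate an uptime guarantee into an annual downtime budget.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for sla in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - sla / 100)
    print(f"{sla}% uptime allows ~{downtime_hours:.1f} hours of downtime per year")
```

The gap between 99% and 99.9% is the difference between roughly three and a half days of downtime a year and under nine hours.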
3. What security certifications or frameworks do you adhere to?
Why it matters: Security certifications like ISO 27001 or SOC 2 give you independent assurance that a supplier's security practices have been verified by a third party. In the absence of those certifications, you need to understand what framework the vendor does use and whether it is credible.
What to ask:
- Are you ISO 27001 certified or SOC 2 Type II certified?
- If not, what security framework do you adhere to and is there documentation we can review?
- Is certification planned, and on what timeline?
- How do you handle security vulnerabilities when they are discovered?
What a good answer looks like: Either a current certification with a verifiable certificate or a named and documented framework with evidence of adherence. If certification is on the roadmap, ask for a realistic timeline rather than accepting an indefinite commitment.
4. Is there an audit trail for data changes?
Why it matters: In regulated environments and in organisations that handle sensitive personal data, you'll need to demonstrate who changed what and when. Safeguarding teams and colleagues with compliance/legal responsibilities often require this. It also matters operationally: if a record is edited incorrectly, can you see what it said before?
What to ask:
- Is there a log of which user made changes to a contact record?
- Is the previous value of a field retained when it is overwritten?
- Are contact movements between projects or programmes logged?
- What is the retention period for audit logs?
What a good answer looks like: A clear distinction between activity logs (who did what, when) and version history (what the data said before it was changed). Many platforms have the former but not the latter. If version history is absent and it matters to your organisation, treat this as a firm requirement in any negotiation, not an afterthought.
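The distinction is easier to see with concrete records. A hypothetical pair, with field names that are ours rather than any particular platform's:

```python
# An activity log entry tells you a change happened, but not what changed.
activity_log_entry = {
    "user": "j.smith@example.org",
    "action": "updated_contact",
    "contact_id": 4821,
    "timestamp": "2024-03-14T10:32:00Z",
}

# A version history entry lets you see, and restore, what the record
# said before the change.
version_history_entry = {
    "contact_id": 4821,
    "field": "postcode",
    "previous_value": "SW1A 1AA",
    "new_value": "SW1A 2AA",
    "changed_by": "j.smith@example.org",
    "changed_at": "2024-03-14T10:32:00Z",
}
```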
5. How does the API work and what data can we access?
Why it matters: Most larger nonprofits will eventually want to connect their impact platform to other systems, for example a business intelligence tool (e.g. Tableau or Power BI), another CRM or perhaps a funder's in-house reporting portal. If the API is limited, undocumented or only exposes a subset of your data, integrations become expensive or impossible.
What to ask:
- Do you have a REST API, and is it publicly documented?
- Which data entities are accessible via the API: contacts, activities, survey responses?
- Are custom fields (fields we create ourselves) exposed through the API, or only standard fields?
- Are there rate limits or record limits on API calls and what are they?
- Are there additional charges for API usage?
What a good answer looks like: A link to actual documentation, not a promise that documentation exists. Confirmation that custom fields are accessible, rather than being limited to standard fields such as Name, Date Created and Created By. If you've gone to the trouble of creating your own data framework, you want to be able to access all of that richness via the API. Any rate or record limits should be stated in numbers, not described as "reasonable" or "generous."
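If you have trial access, this is straightforward to verify yourself. A minimal sketch using Python's requests library, assuming a hypothetical endpoint and token; substitute the vendor's documented URL and authentication:

```python
import requests

# Hypothetical base URL and token for illustration only; use the
# vendor's documented endpoint and authentication scheme.
BASE_URL = "https://api.example-platform.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Fetch a single contact and inspect which fields the API exposes.
response = requests.get(f"{BASE_URL}/contacts/4821", headers=HEADERS)
response.raise_for_status()
contact = response.json()

print(sorted(contact.keys()))
# If only standard fields appear (name, created_at, created_by) and
# none of your custom fields do, the API does not expose the data
# framework you built, which is exactly the limitation to press on.
```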
6. What mobile capability is available and does it work offline?
Why it matters: Many nonprofits deliver services in environments with unreliable connectivity, such as community venues, rural settings and overseas contexts. If data can only be captured when online, you either lose data or end up running a parallel paper-based system, neither of which is acceptable at scale.
What to ask:
- Is there a native mobile app, and for which platforms (iOS, Android)?
- Does the app support offline data capture with sync on reconnection?
- Are all features available offline, or only a subset?
What a good answer looks like: A specific answer about which platforms are supported and which features work offline. If a vendor says "yes, we have a mobile app" but cannot confirm offline capability, test it before signing. A browser-based platform is not the same as an offline-capable native app.
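Under the hood, offline capability comes down to where data is written first. A minimal sketch of the queue-and-sync pattern, purely to illustrate the shape of the idea rather than any vendor's implementation:

```python
import json
from pathlib import Path

# Records are queued locally and flushed when connectivity returns.
# Real apps use an on-device database and conflict handling; this
# only shows the pattern.
QUEUE_FILE = Path("pending_records.jsonl")

def capture(record: dict) -> None:
    """Always write locally first, so nothing is lost while offline."""
    with QUEUE_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def sync(upload) -> None:
    """On reconnection, upload queued records and clear the queue."""
    if not QUEUE_FILE.exists():
        return
    for line in QUEUE_FILE.read_text().splitlines():
        upload(json.loads(line))  # upload() stands in for the platform's API call
    QUEUE_FILE.unlink()
```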
7. How is pricing structured and what triggers additional charges?
Why it matters: Platform pricing is rarely as simple as it first appears. Many vendors charge a base licence fee that looks manageable, with usage-based charges that can escalate significantly once you are live and generating real data volumes. Discovering these at invoice stage is unpleasant.
What to ask:
- What is included in the base licence fee?
- What usage thresholds trigger additional charges, and what are those charges?
- Are there charges related to number of contact records, survey responses, automation runs, or API calls?
- Are overage charges documented in the contract or Terms and Conditions?
What a good answer looks like: Specific thresholds and specific charges, in writing. A vendor who is reluctant to put usage-based charges in the contract is a vendor whose costs you cannot forecast. Require that all overage pricing appears in the Terms and Conditions before signing.
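Once you have the thresholds and charges in writing, forecasting is simple arithmetic. The numbers below are illustrative, not any vendor's actual pricing:

```python
# Forecast annual cost from a hypothetical pricing structure: a base
# licence fee plus a per-record charge above an included threshold.
base_fee = 6_000           # annual licence fee
included_records = 10_000  # contact records included in the base fee
overage_per_record = 0.50  # charge per record above the threshold

for projected_records in (8_000, 15_000, 40_000):
    overage = max(0, projected_records - included_records) * overage_per_record
    total = base_fee + overage
    print(f"{projected_records:>6} records -> £{total:,.2f} per year")
```

Run your own growth projections through whatever structure the vendor quotes; if you cannot get the inputs for this calculation, that is itself an answer.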
8. What can we export and what happens to our data if we leave?
Why it matters: Vendor lock-in is real. Some platforms make it technically difficult or commercially unattractive to leave by limiting what you can export. If your beneficiary data, survey history and outcome records are trapped in a proprietary format, your negotiating position at renewal is significantly weakened — and your ability to move platform is constrained.
What to ask:
- What data can we export, and in what format?
- Can we export all custom fields, or only standard fields?
- Can we export survey response data in a format that can be imported elsewhere?
- Is export available at any time, or only on request?
- Is there a charge for data export on departure?
What a good answer looks like: Unrestricted, self-service export in open formats (CSV is the minimum). Any platform that requires you to request your own data from the vendor, charges for it, or cannot export custom fields, is a platform that does not want you to be able to leave. That is worth knowing before you commit.
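A trial export is easy to verify. This short check, with placeholder field names you would swap for your own custom fields, confirms whether the export actually contains your data framework:

```python
import csv

# Placeholder custom field names; substitute the fields from your
# own data framework.
expected_custom_fields = {"wellbeing_score", "referral_source", "outcome_stage"}

with open("contacts_export.csv", newline="") as f:
    exported_columns = set(next(csv.reader(f)))  # header row

missing = expected_custom_fields - exported_columns
if missing:
    print(f"Export is missing custom fields: {sorted(missing)}")
else:
    print("All expected custom fields are present in the export.")
```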
9. What BI tool connectivity is available?
Why it matters: Larger nonprofits increasingly run centralised intelligence dashboards that pull data from multiple systems into a single view. If your impact platform cannot feed into Power BI, Tableau or equivalent tools, you either manage a manual export process or accept that impact data sits in a silo.
What to ask:
- Do you have a native connector for Power BI or Tableau?
- If not, can we connect using the REST API and is there documentation for doing so?
- What does a typical API response look like: JSON, XML, something else?
- Are there record limits on API calls that would affect bulk data extraction for BI purposes?
What a good answer looks like: Either a native connector, or clear and documented API access with a realistic pathway to building your own connector. If neither exists, the integration cost falls entirely on you. Factor that into the total cost of ownership.
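If you do end up building your own connector, the typical shape is a paginated pull from the API into a CSV that Power BI or Tableau can ingest. A sketch, again assuming a hypothetical endpoint whose page parameters and field names will differ by vendor:

```python
import csv
import requests

# Hypothetical paginated endpoint for illustration only.
BASE_URL = "https://api.example-platform.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

rows, page = [], 1
while True:
    resp = requests.get(
        f"{BASE_URL}/activities",
        headers=HEADERS,
        params={"page": page, "per_page": 500},  # stay within documented rate limits
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:  # an empty page means we have everything
        break
    rows.extend(batch)
    page += 1

# Write to CSV, which Power BI and Tableau can both ingest directly.
if rows:
    with open("activities_for_bi.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```

This is where record limits bite: if the API caps you at a few thousand records per day, a nightly refresh of a large dataset may simply not be possible.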
10. How do you handle bugs, and what does resolution look like?
Why it matters: Every platform has bugs. What distinguishes good vendors from poor ones is not the absence of bugs but the quality of their response when bugs occur. A transparent, trackable resolution process protects your team and gives you evidence to present internally when things go wrong.
What to ask:
- How do we report a bug: by email, through a ticketing system, or over the phone?
- Can we track the status of reported bugs in real time?
- What are your SLAs for bug resolution by severity?
- Who owns our relationship from a support perspective?
What a good answer looks like: A named ticketing system with visible status tracking, clear severity tiers with associated response times, and a named point of contact. If a vendor's bug reporting process is "email us and we'll look into it," the resolution process will reflect that.
Using this checklist
Not every question on this list will matter equally to every organisation. An organisation with 20 staff in a single location has different requirements from one with 200 staff across five countries. But the discipline of asking these questions and requiring specific, documented answers is the same regardless of the scale you're working at.
The questions that vendors struggle to answer clearly are usually the ones most worth pressing on. A confident platform provider will welcome this level of scrutiny. The ones who deflect or over-promise are showing you something important about how the relationship will feel once you are a paying customer.
Makerble's own answers to these questions are available in our Technical FAQ for IT and Systems Leads. If you'd like to discuss your organisation's specific requirements, get in touch with our team.