2 changes: 1 addition & 1 deletion .github/workflows/checks.yml
@@ -14,7 +14,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683

- name: Shell script static analysis
run: shellcheck bin/fetch-configlet bin/verify-unity-version bin/check-unitybegin bin/run-tests format.sh
run: shellcheck bin/fetch-configlet bin/verify-unity-version bin/check-unitybegin format.sh

- name: Check concept exercises formatting
run: |
4 changes: 3 additions & 1 deletion .github/workflows/test.yml
@@ -20,7 +20,9 @@ jobs:

steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Determine number of available hardware threads
run: echo "NUM_THREADS=$(nproc || sysctl -n hw.ncpu)" >> $GITHUB_ENV
- name: Test Exercises
env:
CC: ${{ matrix.compiler }}
run: ./bin/run-tests -a
run: make -j ${{ env.NUM_THREADS }}
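The CI step above can be reproduced locally. This is a minimal sketch assuming GNU make and a shell where either `nproc` (Linux/coreutils) or `sysctl -n hw.ncpu` (macOS/BSD) is available:

```bash
# Build and test every exercise in parallel, one job per hardware thread,
# mirroring what test.yml now does.
make -j "$(nproc || sysctl -n hw.ncpu)"
```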
150 changes: 0 additions & 150 deletions bin/run-tests

This file was deleted.

13 changes: 6 additions & 7 deletions docs/CONTRIBUTING.md
@@ -105,7 +105,8 @@ The structure of an exercise directory is as follows (note the differing hyphen
These are both skipped by the `exercism` CLI when downloading to the client, so it is imperative that you do not reference the names of the files in your code.
If you need to provide a header file example that is necessary to run your tests, it should be named `{my_exercise}.h` instead.
Please also use [include guards][] in your header files.
The exercise tests can be run using the [`bin/run-tests`][run-tests] script which will rename the `example.{c|h}` files accordingly.
The exercise tests can be run using `make` from the repository root.
The top-level makefile will rename the `example.{c|h}` files accordingly (use `make help` to learn about individual targets).
* `makefile` - is the makefile for the exercise as it would build using proper filenames (i.e. `{exercise_name}.c` and `{exercise_name}.h` instead of `example.c` and `example.h` respectively).
Makefiles are expected to change very little between exercises so it should be easy to copy one from another exercise.
* `README.md` - is the readme that relates to the exercise.
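To make the renaming concrete, here is a sketch of how the top-level makefile stages the files for a practice exercise with the slug `two-fer` (the slug and paths are illustrative only):

```
exercises/practice/two-fer/.meta/example.c  ->  build/exercises/practice/two-fer/two_fer.c
exercises/practice/two-fer/.meta/example.h  ->  build/exercises/practice/two-fer/two_fer.h   (only if present)
exercises/practice/two-fer/test_two_fer.c   ->  build/exercises/practice/two-fer/test_two_fer.c   (TEST_IGNORE() calls commented out)
```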
@@ -142,7 +143,7 @@ If you would like the [`/format`][format-workflow] automated action to work correctly
* [Lychee link checker][lychee] action
* `configlet.yml` fetches the latest version of configlet and then runs its `lint` command on the track
* `format-code.yml` checks for the string `/format` within any comment on a PR; if it finds it, `format.sh` is run on the exercises and any resulting changes are committed. A deploy key is required for the commit to be able to re-trigger CI. The deploy key is administered by Exercism directly.
* `build.yml` runs the `./bin/run-tests` tool on all exercises
* `test.yml` runs `make` in the repository root to test all exercises

### The Tools

@@ -160,21 +161,20 @@ The work the tools in this directory perform is described as follows:
```

* `fetch-configlet` fetches the `configlet` tool from its [repository][configlet].
* `run-tests` loops through each exercise, prepares the exercise for building and then builds it using `make`, runs the unit tests and then checks it for memory leaks with AddressSanitizer.

### Run Tools Locally

You can also run individual tools on your own machine before committing.
Firstly make sure you have the necessary applications installed (such as `clang-format`, [`git`][git], [`sed`][sed], [`make`][make] and a C compiler), and then run the required tool from the repository root. For example:

```bash
~/git/c$ ./bin/run-tests
~/git/c$ make
```

If you'd like to run only some of the tests to check your work, you can specify them as arguments to the run-tests script.
If you'd like to run only some of the tests to check your work, you can specify them as targets to `make`.

```bash
~/git/c$ ./bin/run-tests -p -e acronym -e all-your-base -e allergies
~/git/c$ make acronym all-your-base allergies
```
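
The top-level makefile provides a few more convenience targets; run `make help` for the full list. For example:

```bash
~/git/c$ make list-practice    # list all practice exercise slugs
~/git/c$ make acronym.clean    # remove the build artifacts of a single exercise
~/git/c$ make clean            # remove all build artifacts
```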

## Test Runner
@@ -207,7 +207,6 @@ Read more about [test runners].
[versions]: ./VERSIONS.md
[test-file-layout]: ./C_STYLE_GUIDE.md#test-file-layout
[include guards]: https://en.wikipedia.org/wiki/Include_guard
[run-tests]: ../bin/run-tests
[configlet]: https://github.com/exercism/configlet
[configlet releases page]: https://github.com/exercism/configlet/releases
[hosted runners]: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners
129 changes: 129 additions & 0 deletions makefile
@@ -0,0 +1,129 @@
# This makefile creates a target to build and test each exercise using the provided example
# implementation. The exercise target depends on a list of other targets that
# 1) Copy the main test file and adjust it to include all tests,
# 2) Copy the example implementation so that it is used,
# 3) Copy the makefile and unittest framework,
# 4) Build and test the exercise.
#
# Use `make <slug>` to build and test a specific exercise. Simply running `make` builds and
# tests all available exercises.


# Macro to create the rules for one exercise.
# Arguments:
# $(1) - slug
# $(2) - slug with dashes replaced by underscores
# $(3) - type of exercise: 'practice' or 'concept'
# $(4) - name of test implementation: 'example' or 'exemplar'
define setup_exercise

# Copy the test file and comment out any TEST_IGNORE() calls
build/exercises/$(3)/$(1)/test_$(2).c: exercises/$(3)/$(1)/test_$(2).c
@mkdir -p $$(dir $$@)
@sed 's#TEST_IGNORE();#// &#' $$< > $$@

# Copy example/exemplar implementation
build/exercises/$(3)/$(1)/$(2).c: exercises/$(3)/$(1)/.meta/$(4).c
@mkdir -p $$(dir $$@)
@cp $$< $$@

build/exercises/$(3)/$(1)/$(2).h: $$(wildcard exercises/$(3)/$(1)/.meta/$(4).h exercises/$(3)/$(1)/*.h)
@# Copy all .h files from the exercise's directory
@cp exercises/$(3)/$(1)/*.h build/exercises/$(3)/$(1) || true
@# If an example.h/exemplar.h file exists in .meta, replace slug.h with that one
@if [ -e exercises/$(3)/$(1)/.meta/$(4).h ]; then \
cp exercises/$(3)/$(1)/.meta/$(4).h build/exercises/$(3)/$(1)/$(2).h; \
fi

# Copy Makefile
build/exercises/$(3)/$(1)/makefile: exercises/$(3)/$(1)/makefile
@mkdir -p $$(dir $$@)
@cp $$< $$@

# Copy the test framework
build/exercises/$(3)/$(1)/test-framework: $$(wildcard exercises/$(3)/$(1)/test-framework/*)
@mkdir -p $$@
@cp exercises/$(3)/$(1)/test-framework/* build/exercises/$(3)/$(1)/test-framework/

INPUT_FILE_TARGETS = \
build/exercises/$(3)/$(1)/test_$(2).c \
build/exercises/$(3)/$(1)/$(2).c \
build/exercises/$(3)/$(1)/$(2).h \
build/exercises/$(3)/$(1)/makefile \
build/exercises/$(3)/$(1)/test-framework

# Build the exercise.
build/exercises/$(3)/$(1)/tests.out: $$(INPUT_FILE_TARGETS)
$$(MAKE) -C build/exercises/$(3)/$(1) tests.out

# Run the exercise's test binary and create a stamp file if all tests pass
build/exercises/$(3)/$(1)/tests-passed.stamp: build/exercises/$(3)/$(1)/tests.out
@rm -f build/exercises/$(3)/$(1)/tests-passed.stamp
@build/exercises/$(3)/$(1)/tests.out && touch build/exercises/$(3)/$(1)/tests-passed.stamp

# Build and run the memcheck variant
# This is a single target because an exercise's makefile builds and runs memcheck in one step
build/exercises/$(3)/$(1)/memcheck-passed.stamp: $$(INPUT_FILE_TARGETS)
@rm -f build/exercises/$(3)/$(1)/memcheck-passed.stamp
$$(MAKE) -C build/exercises/$(3)/$(1) memcheck && touch build/exercises/$(3)/$(1)/memcheck-passed.stamp

# Top-level target for an exercise. It depends on the stamp files for the actual tests and the memcheck
# run. Only when both stamp files are present is this target considered "done"; as long as either one is
# missing, building this target re-runs the tests or the memcheck binary.
.PHONY: $(1)
$(1): build/exercises/$(3)/$(1)/tests-passed.stamp build/exercises/$(3)/$(1)/memcheck-passed.stamp

# Remove all artifacts of the exercise
.PHONY: $(1).clean
$(1).clean: $$(INPUT_FILE_TARGETS)
rm -rf build/exercises/$(3)/$(1)

endef

PRACTICE_EXERCISES := $(notdir $(wildcard exercises/practice/*))
CONCEPT_EXERCISES := $(notdir $(wildcard exercises/concept/*))

all: practice concept

.PHONY: practice
practice: $(PRACTICE_EXERCISES)
@if [ -z "$(PRACTICE_EXERCISES)" ]; then \
echo "No practice exercises found."; \
fi

.PHONY: concept
concept: $(CONCEPT_EXERCISES)
@if [ -z "$(CONCEPT_EXERCISES)" ]; then \
echo "No concept exercises found."; \
fi

# Instantiate the macro for each practice and concept exercise to create its targets.
$(foreach exercise,$(PRACTICE_EXERCISES),$(eval $(call setup_exercise,$(exercise),$(subst -,_,$(exercise)),practice,example)))
$(foreach exercise,$(CONCEPT_EXERCISES),$(eval $(call setup_exercise,$(exercise),$(subst -,_,$(exercise)),concept,exemplar)))
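
# For illustration only (not part of the build): for a practice slug such as `two-fer`,
# the foreach above evaluates $(call setup_exercise,two-fer,two_fer,practice,example),
# creating the copy/build rules under build/exercises/practice/two-fer/ plus the phony
# targets `two-fer` (build, test and memcheck) and `two-fer.clean`.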

.PHONY: list-practice
list-practice:
@for exercise in $(PRACTICE_EXERCISES); do \
echo "$$exercise"; \
done

.PHONY: list-concept
list-concept:
@for exercise in $(CONCEPT_EXERCISES); do \
echo "$$exercise"; \
done

.PHONY: clean
clean:
rm -rf build

.PHONY: help
help:
@echo "Available targets:"
@echo " all - Build and test all exercises (default)"
@echo " <slug> - Build and test a specific exercise given by its slug"
@echo " clean - Remove all build artifacts"
@echo " <slug>.clean - Remove build artifacts of a specific exercise given by its slug"
@echo " list-practice - List all practice exercises"
@echo " list-concept - List all concept exercises"
@echo " help - Show this help message"