Commit graph

89 commits

Author SHA1 Message Date
Bharata B Rao
309b85548f
cpu: Convert cpu_index into a bitmap
Currently CPUState::cpu_index is monotonically increasing and a newly
created CPU always gets the next higher index. The next available
index is calculated by counting the existing number of CPUs. This is
fine as long as we only add CPUs, but there are architectures which
are starting to support CPU removal, too. For an architecture like PowerPC
which derives its CPU identifier (device tree ID) from cpu_index, the
existing logic of generating cpu_index values causes problems.

With the currently proposed method of handling vCPU removal by parking
the vCPU fd in QEMU
(Ref: http://lists.gnu.org/archive/html/qemu-devel/2015-02/msg02604.html),
generating cpu_index this way will not work for PowerPC.

This patch changes the way cpu_index is handed out by maintaining
a bitmap of CPUs that tracks both addition and removal of CPUs.

The CPU bitmap allocation logic is part of cpu_exec_init(), which is
called by the instance_init routines of the various CPU targets. The
newly added cpu_exec_exit() API handles the deallocation, and is called
from the generic CPU instance_finalize.

Note: This new CPU enumeration is for !CONFIG_USER_ONLY only.
CONFIG_USER_ONLY continues to have the old enumeration logic.

Backports commit b7bca7333411bd19c449147e8202ae6b0e4a8e09 from qemu
2018-03-21 08:06:07 -04:00
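
A minimal sketch of the bitmap-style index allocation described above, assuming a fixed MAX_CPUS; the names cpu_index_map, cpu_index_alloc() and cpu_index_free() are illustrative placeholders, not QEMU's actual identifiers:

    #include <limits.h>

    #define MAX_CPUS      256
    #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long cpu_index_map[(MAX_CPUS + BITS_PER_WORD - 1) / BITS_PER_WORD];

    /* Hand out the lowest free index and mark it as in use. */
    static int cpu_index_alloc(void)
    {
        for (int i = 0; i < MAX_CPUS; i++) {
            unsigned long *word = &cpu_index_map[i / BITS_PER_WORD];
            unsigned long bit = 1UL << (i % BITS_PER_WORD);
            if (!(*word & bit)) {
                *word |= bit;
                return i;
            }
        }
        return -1; /* every index is taken */
    }

    /* Called on CPU removal, so the index can be reused by a later hot-plug. */
    static void cpu_index_free(int index)
    {
        cpu_index_map[index / BITS_PER_WORD] &= ~(1UL << (index % BITS_PER_WORD));
    }
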
Igor Mammedov
87db6e033b
cpu: Use CPUClass->parse_features() as convertor to global properties
Currently CPUClass->parse_features() is used to parse the -cpu
features string and set properties on created CPU instances.

But since the features specified by -cpu apply to every created CPU
instance, it doesn't make sense to parse the same features string for
each CPU. It also forces every target that cares about the features
string to explicitly call the CPUClass->parse_features() parser, which
gets in the way if we consider using the generic device_add for CPU
hotplug, as device_add knows nothing about CPU-specific hooks.

It turns out we can use the global properties mechanism to set
properties on every created CPU instance of a given type. That way
it's possible to convert CPU features into a set of global properties
for the CPU type specified by -cpu cpu_model, and the common
Device.device_post_init() will apply them to CPUs of that type
automatically, regardless of whether the CPU is created manually or
with the help of device_add.

Backports commits 62a48a2a5798425997152dea3fc48708f9116c04 and
f313369fdb78f849ecbbd8e5d88f01ddf38786c8 from qemu
2018-03-20 12:00:27 -04:00
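
A rough sketch of the idea: parse the -cpu features string once and register each feature as a per-type global property, instead of calling the parser for every CPU; register_global_property() here is a hypothetical stand-in for QEMU's global-property machinery:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical: record "driver.property=value" once, at -cpu parse time;
     * device init later applies all globals matching the instance's type. */
    void register_global_property(const char *driver, const char *prop, const char *value);

    static void cpu_features_to_globals(const char *cpu_type, const char *features)
    {
        char *copy = strdup(features);
        for (char *tok = strtok(copy, ","); tok; tok = strtok(NULL, ",")) {
            char *eq = strchr(tok, '=');
            if (eq) {
                *eq = '\0';
                register_global_property(cpu_type, tok, eq + 1);
            } else {
                register_global_property(cpu_type, tok, "on"); /* bare "feat" means on */
            }
        }
        free(copy);
    }
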
Alexey Kardashevskiy
b90333a531
memory: Share special empty FlatView
This shares a cached empty FlatView among address spaces. The empty
FV is used every time a root MR renders into a FV without memory
sections, which happens when the MR or its children are not enabled or
are zero-sized. The empty_view is not NULL, to keep the rest of the
memory API intact; it also has a dispatch tree for the same reason.

On POWER8 with 255 CPUs, 255 virtio-net, 40 PCI bridges guest this halves
the amount of FlatView's in use (557 -> 260) and dispatch tables
(~800000 -> ~370000). In an unrelated experiment with 112 non-virtio
devices on x86 ("-M pc"), only 4 FlatViews are alive, and about ~2000
are created at startup.

Backports commit 092aa2fc65b7a35121616aad8f39d47b8f921618 from qemu
2018-03-11 22:34:28 -04:00
Alexey Kardashevskiy
f2c72dc278
memory: Share FlatView's and dispatch trees between address spaces
This allows sharing flat views between address spaces (AS) when
the same root memory region is used to create a new address space.
This is done by walking through all ASes and caching one FlatView per
physical root MR (i.e. not aliased).

This removes the search for duplicates from address_space_init_shareable(),
as FlatViews are now shared elsewhere, and keeping as::ref_count correct
seems an unnecessary and useless complication.

This should cause no change in memory use or boot time yet.

Backports commit 967dc9b1194a9281124b2e1ce67b6c3359a2138f from qemu
2018-03-11 22:05:44 -04:00
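
A self-contained sketch of the caching idea, with placeholder types and a dummy renderer standing in for the real FlatView generation:

    #include <stddef.h>

    typedef struct MemoryRegion MemoryRegion;           /* opaque here */
    typedef struct FlatView { int refcount; } FlatView; /* stripped down */

    #define MAX_ROOTS 64
    static struct { MemoryRegion *root; FlatView *view; } fv_cache[MAX_ROOTS];
    static int fv_cache_len;

    /* Placeholder for rendering a root MR's topology into a FlatView. */
    static FlatView *render_flatview(MemoryRegion *root)
    {
        (void)root;
        static FlatView fv;
        return &fv;
    }

    /* One FlatView per physical root MR; every AS sharing that root reuses it. */
    static FlatView *flatview_for_root(MemoryRegion *root)
    {
        for (int i = 0; i < fv_cache_len; i++) {
            if (fv_cache[i].root == root) {
                fv_cache[i].view->refcount++;
                return fv_cache[i].view;
            }
        }
        FlatView *fv = render_flatview(root);  /* rendered only once per root */
        fv->refcount = 1;
        if (fv_cache_len < MAX_ROOTS) {
            fv_cache[fv_cache_len].root = root;
            fv_cache[fv_cache_len].view = fv;
            fv_cache_len++;
        }
        return fv;
    }
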
Richard Henderson
a58eb310eb
target/arm: Use helper_retaddr in stxp helpers
We use raw memory primitives along the !parallel_cpus paths in order to
simplify the endianness handling. Because of that, we did not benefit
from the generic changes to cpu_ldst_user_only_template.h.

The simplest fix is to manipulate helper_retaddr here.

Backports commit 3bdb5fcc9a08a9a47ce30c4e0c2d64c95190b49d from qemu
2018-03-05 13:48:28 -05:00
Richard Henderson
f76eb22a46
tcg: Record code_gen_buffer address for user-only memory helpers
When we handle a signal from a fault within a user-only memory helper,
we cannot cpu_restore_state with the PC found within the signal frame.
Use a TLS variable, helper_retaddr, to record the unwind start point
to find the faulting guest insn.

Backports commit ec603b5584fa71213ef8f324fe89e4b27cc9d2bc from qemu
2018-03-05 13:48:28 -05:00
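
The pattern, sketched minimally under the assumption of a GCC-style __thread variable; helper_ldl_user() is an illustrative helper, not the actual QEMU code:

    #include <stdint.h>

    /* Thread-local unwind start point for faults taken inside memory helpers. */
    static __thread uintptr_t helper_retaddr;

    static inline uint32_t helper_ldl_user(const void *host_addr, uintptr_t retaddr)
    {
        helper_retaddr = retaddr;                    /* where to unwind from */
        uint32_t val = *(const uint32_t *)host_addr; /* may fault -> SIGSEGV */
        helper_retaddr = 0;                          /* left the helper access */
        return val;
    }

    /* In the SIGSEGV handler (sketch): prefer helper_retaddr over the PC found
     * in the signal frame when restoring guest state for the faulting insn:
     *
     *     uintptr_t pc = helper_retaddr ? helper_retaddr : signal_frame_pc;
     *     cpu_restore_state(cpu, pc);
     */
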
Emilio G. Cota
8e58c67968
util: add cacheinfo
Add helpers to gather cache info from the host at init-time.

For now, only export the host's I/D cache line sizes, which we
will use to improve cache locality so as to avoid false sharing.

Backports commit b255b2c8a5484742606e8760870ba3e14d0c9605 from qemu
2018-03-03 16:58:28 -05:00
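
A minimal way to obtain the host D-cache line size at init time on Linux/glibc, with a fallback; the backported helper probes more sources per platform than this sketch does:

    #include <stdio.h>
    #include <unistd.h>

    static int host_dcache_linesize = 64;   /* conservative fallback */

    static void cacheinfo_init(void)
    {
    #ifdef _SC_LEVEL1_DCACHE_LINESIZE
        long sz = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
        if (sz > 0) {
            host_dcache_linesize = (int)sz;
        }
    #endif
    }

    int main(void)
    {
        cacheinfo_init();
        /* Pad or align hot per-thread data to this size to avoid false sharing. */
        printf("D-cache line size: %d bytes\n", host_dcache_linesize);
        return 0;
    }
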
Alex Bennée
632b853761
tcg: remove global exit_request
There are now only two uses of the global exit_request left.

The first ensures we exit the run_loop when we first start to process
pending work and in the kick handler. This is just as easily done by
setting the first_cpu->exit_request flag.

The second use is in the round robin kick routine. The global
exit_request ensured every vCPU would set its local exit_request and
cause a full exit of the loop. Now that the iothread isn't being held
while running, we can just rely on the kick handler to push us out as
intended.

We lightly re-factor the main vCPU thread to ensure cpu->exit_request
causes us to exit the main loop and process any IO requests that might
come along. As a cpu->exit_request may legitimately get squashed
while processing the EXCP_INTERRUPT exception, we also check
cpu->queued_work_first to ensure queued work is expedited as soon as
possible.

Backports commit e5143e30fb87fbf179029387f83f98a5a9b27f19 from qemu
2018-03-02 09:38:08 -05:00
Alex Bennée
4d90497d14
tcg: rename tcg_current_cpu to tcg_current_rr_cpu
..and make the definition local to cpus. In preparation for MTTCG the
concept of a global tcg_current_cpu will no longer make sense. However
we still need to keep track of it in the single-threaded case to be able
to exit quickly when required.

qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
emphasise its use case. qemu_cpu_kick() now kicks the relevant cpu as
well as calling qemu_cpu_kick_rr_cpu(), which will become a no-op in MTTCG.

For the time being the setting of the global exit_request remains.

Backports commit 791158d93b27f22a17c2ada06621831d54f09a2c from qemu

Also atomically sets the unicorn equivalents
2018-03-02 09:28:51 -05:00
KONRAD Frederic
c5730ff194
tcg: add options for enabling MTTCG
We know there will be cases where MTTCG won't work until additional work
is done in the front/back ends to support it. It will however be useful to
be able to turn it on.

As a result MTTCG will default to off unless the combination is
supported. However, the user can turn it on for the sake of testing.

Backports commit 8d4e9146b3568022ea5730d92841345d41275d66 from qemu
2018-03-02 09:25:01 -05:00
Richard Henderson
e35aacd5ae
tcg: Add EXCP_ATOMIC
When we cannot emulate an atomic operation within a parallel
context, this exception allows us to stop the world and try
again in a serial context.

Backports commit fdbc2b5722f6092e47181a947c90fd4bdcc1c121 from qemu

Also backports parts of commit 02d57ea115b7669f588371c86484a2e8ebc369be
2018-02-27 11:57:58 -05:00
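
The stop-the-world retry pattern this describes, as a minimal sketch; every identifier below is a placeholder rather than QEMU's actual API:

    enum { EXCP_NONE = 0, EXCP_ATOMIC = 1 };                /* placeholder codes */

    static int  cpu_exec_step(void) { return EXCP_NONE; }   /* placeholder */
    static void stop_all_other_vcpus(void)   { }            /* placeholder */
    static void resume_all_other_vcpus(void) { }            /* placeholder */

    static void handle_exception(int excp)
    {
        if (excp == EXCP_ATOMIC) {
            /* The atomic op could not be emulated while other vCPUs run:
             * stop the world, re-run the instruction serially, resume. */
            stop_all_other_vcpus();
            cpu_exec_step();
            resume_all_other_vcpus();
        }
        /* ... other exception codes ... */
    }

    int main(void)
    {
        handle_exception(cpu_exec_step());
        return 0;
    }
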
Peter Maydell
db8b0a82b1
cpu: Support a target CPU having a variable page size
Support target CPUs having a page size which isn't known
at compile time. To use this, the CPU implementation should:
* define TARGET_PAGE_BITS_VARY
* not define TARGET_PAGE_BITS
* define TARGET_PAGE_BITS_MIN to the smallest value it
might possibly want for TARGET_PAGE_BITS
* call set_preferred_target_page_bits() in its realize
function to indicate the actual preferred target page
size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue
to define TARGET_PAGE_BITS appropriately for the guest
OS page size.

Machines which want to take advantage of having the page
size something larger than TARGET_PAGE_BITS_MIN must
set the MachineClass minimum_page_bits field to a value
which they guarantee will be no greater than the preferred
page size for any CPU they create.

Note that changing the target page size by setting
minimum_page_bits is a migration compatibility break
for that machine.

For debugging purposes, attempts to use TARGET_PAGE_SIZE
before it has been finally confirmed will assert.

Backports commit 20bccb82ff3ea09bcb7c4ee226d3160cab15f7da from qemu
2018-02-26 12:29:08 -05:00
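
A hedged sketch of what the recipe above might look like in a CPU target; my_cpu_realizefn() and the 16-bit preference are made-up examples, and the exact signatures of the core helpers may differ:

    /* In the target's configuration header (sketch):
     *   #define TARGET_PAGE_BITS_VARY
     *   #define TARGET_PAGE_BITS_MIN 12
     */

    /* In the CPU's realize function (sketch): */
    static void my_cpu_realizefn(DeviceState *dev, Error **errp)
    {
        /* Report the page size this CPU prefers (e.g. 64K pages) and
         * propagate any conflict with CPUs realized earlier. */
        if (!set_preferred_target_page_bits(16)) {
            error_setg(errp, "conflicting preferred target page sizes");
            return;
        }
        /* ... remainder of realize ... */
    }
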
Vijaya Kumar K
a7229cc08a
translate-all.c: Compute L1 page table properties at runtime
Stop computing the L1 page mapping table properties statically with
macros that depend on TARGET_PAGE_BITS. Drop the V_L1_SIZE, V_L1_SHIFT
and V_L1_BITS macros and replace them with variables that are computed
at an early stage of VM boot.

Removing this dependency helps make TARGET_PAGE_BITS dynamic.

Backports commit 66ec9f49399f0a9fa13ee77c472caba0de2773fc from qemu
2018-02-26 11:46:58 -05:00
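
A sketch of computing the L1 geometry at startup instead of with macros; the constants and variable names below are illustrative, not necessarily the values the commit uses:

    #include <stdint.h>

    #define L1_MAP_ADDR_SPACE_BITS 48   /* example */
    #define V_L2_BITS              10   /* bits resolved per lower level */

    static int target_page_bits = 12;   /* known only at run time now */

    static uint32_t v_l1_bits, v_l1_size, v_l1_shift;

    /* Called once, early at VM boot, after target_page_bits is final. */
    static void page_table_config_init(void)
    {
        uint32_t bits = L1_MAP_ADDR_SPACE_BITS - target_page_bits;
        v_l1_bits = bits % V_L2_BITS;   /* remainder handled by the L1 level */
        if (v_l1_bits == 0) {
            v_l1_bits = V_L2_BITS;
        }
        v_l1_size  = 1u << v_l1_bits;
        v_l1_shift = bits - v_l1_bits;  /* shift that extracts the L1 index */
    }
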
Peter Lieven
799bf1c3a5
exec: avoid realloc in phys_map_node_reserve
This is the first step in reducing the brk heap fragmentation
created by the map->nodes memory allocation. Since the introduction
of RCU, the freeing of PhysPageMaps is delayed, so that sometimes
several hundred are allocated at the same time.

Even worse, the memory for map->nodes is allocated and shortly
afterwards reallocated. Since the number of nodes it grows
to in the end is the same for all PhysPageMaps, remember this value
and at least avoid the reallocation.

The large number of simultaneous allocations (about 450 x 70kB in
my configuration) has to be addressed later.

Backports commit 101420b886eec36990419bc9ed5b503622af8a0d from qemu
2018-02-25 19:32:40 -05:00
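
The general idea, sketched in isolation: remember how many nodes the previous map finally grew to and allocate that many up front, so the early realloc churn disappears. Names and the growth policy below are illustrative:

    #include <stdlib.h>

    typedef struct Node { int dummy; } Node;

    static unsigned alloc_hint = 16;    /* final size seen for earlier maps */

    typedef struct PhysPageMap {
        Node     *nodes;
        unsigned  nodes_nb;             /* nodes in use */
        unsigned  nodes_nb_alloc;       /* nodes allocated */
    } PhysPageMap;

    static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
    {
        unsigned want = map->nodes_nb + nodes;
        if (want <= map->nodes_nb_alloc) {
            return;
        }
        if (map->nodes_nb_alloc < alloc_hint) {
            map->nodes_nb_alloc = alloc_hint;   /* jump straight to the hint */
        }
        while (map->nodes_nb_alloc < want) {
            map->nodes_nb_alloc *= 2;
        }
        Node *n = realloc(map->nodes, map->nodes_nb_alloc * sizeof(Node));
        if (!n) {
            abort();                            /* sketch: out of memory */
        }
        map->nodes = n;
        if (map->nodes_nb_alloc > alloc_hint) {
            alloc_hint = map->nodes_nb_alloc;   /* remember for the next map */
        }
    }
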
Lioncash
c658126845
include: Move RAMList to ramlist.h
Moves the struct back into qemu's headers
2018-02-20 08:47:51 -05:00
Lioncash
b2a8355f8d
target-i386: Correct unicorn macro 2018-02-19 01:00:47 -05:00
Paolo Bonzini
96e5a7ced3
tcg: introduce tcg_current_cpu
This is already useful on Windows in order to remove tls.h, because
accesses to current_cpu are done from a different thread on that
platform. It will be used on POSIX platforms as soon as TCG stops using
signals to interrupt the execution of translated code.

Backports commit 9373e63297c43752f9cf085feb7f5aed57d959f8 from qemu
2018-02-17 15:23:49 -05:00
Nguyen Anh Quynh
fe466d003a callback to count number of instructions in uc_emu_start() should be executed first. fix #727 2017-06-16 13:22:38 +08:00
bulaza
4b9efdc986 Adding INSN hook checks for x86 (#833)
* adding INSN hook checking for x86

* tabs to spaces

* need to return bool not uc_err

* fixed conditional after switching to bool
2017-05-14 00:16:17 +07:00
Nguyen Anh Quynh
e917c9de10 Merge branch 'master' into msvc2 2017-04-21 01:17:00 +08:00
Nguyen Anh Quynh
3b6779479e cleanup uc_priv.h 2017-03-30 15:59:13 +08:00
Nguyen Anh Quynh
094ca80092 fix conflicts 2017-03-30 12:23:24 +08:00
zhangwm
d8fe34a2e8 armeb: Add support for ARM big endian. 2017-03-13 22:32:44 +08:00
Nguyen Anh Quynh
c01dcf0a14 fix merge conflicts 2017-03-10 21:04:33 +08:00
Ahmed Samy
02e6c14e12 x86: add MSR API via reg API (#755)
Writing to / reading from model-specific registers should be as easy as
calling a function; it's a bit stupid to write shellcode and run it
just to write/read an MSR, and even worse, you need more than just
shellcode to read...

So, add a special register ID called UC_X86_REG_MSR, which should be
passed to uc_reg_write()/uc_reg_read() as the register ID, and then a
data structure which is uc_x86_msr (12 bytes), as the value (always), where:
	Byte	Value		Size
	0	MSR ID		4
	4       MSR val		8
2017-02-24 21:37:19 +08:00
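
Example usage of the MSR register ID described above (illustrative program; the uc_x86_msr layout is as documented in the Unicorn headers, but double-check <unicorn/x86.h> for your version):

    #include <stdio.h>
    #include <unicorn/unicorn.h>

    int main(void)
    {
        uc_engine *uc;
        uc_x86_msr msr;   /* { uint32_t rid; uint64_t value; } */

        if (uc_open(UC_ARCH_X86, UC_MODE_64, &uc) != UC_ERR_OK) {
            return 1;
        }

        /* Write IA32_EFER (MSR 0xC0000080) through the regular register API. */
        msr.rid = 0xC0000080;
        msr.value = 0x500;
        uc_reg_write(uc, UC_X86_REG_MSR, &msr);

        /* Read it back: set rid, uc_reg_read() fills in value. */
        msr.value = 0;
        uc_reg_read(uc, UC_X86_REG_MSR, &msr);
        printf("EFER = 0x%llx\n", (unsigned long long)msr.value);

        uc_close(uc);
        return 0;
    }
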
xorstream
fac6a66860 platform.h move #3 2017-01-21 00:13:21 +11:00
xorstream
b0ae2138fb Merge remote-tracking branch 'unicorn-engine/master' into msvc_native 2017-01-20 22:37:51 +11:00
Nguyen Anh Quynh
c6de7930c9 remove mutex code 2017-01-20 15:44:03 +08:00
Nguyen Anh Quynh
42771848d6 no more spinlock 2017-01-20 14:57:33 +08:00
xorstream
1aeaf5c40d This code should now build the x86_x64-softmmu part 2. 2017-01-19 22:50:28 +11:00
Elton G
47150b6df3 reg_read and reg_write now work with registers W0 through W30 in AArch64 (#716)
* reg_read and reg_write now work with registers W0 through W30 in AArch64 emulation

* Added a regress test for the ARM64 reg_read and reg_write on 32-bit registers (W0-W30)
Added a new macro in uc_priv.h (WRITE_DWORD_TO_QWORD), in order to write to the lower 32 bits of a 64 bit value without overwriting the whole value when using reg_write

* Fixed WRITE_DWORD macro

reg_write would zero out the high order bits when writing to 32 bit registers

e.g. uc.reg_write(UC_X86_REG_EAX, 0) would also set register RAX to zero
2017-01-15 20:13:35 +08:00
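
Example usage (illustrative): reading and writing a 32-bit W register on an AArch64 instance; per the fix, writing W0 should only touch the low 32 bits of X0:

    #include <stdio.h>
    #include <stdint.h>
    #include <unicorn/unicorn.h>

    int main(void)
    {
        uc_engine *uc;
        if (uc_open(UC_ARCH_ARM64, UC_MODE_ARM, &uc) != UC_ERR_OK) {
            return 1;
        }

        uint64_t x0 = 0x1122334455667788ULL;
        uc_reg_write(uc, UC_ARM64_REG_X0, &x0);

        uint32_t w0 = 0xdeadbeef;
        uc_reg_write(uc, UC_ARM64_REG_W0, &w0);   /* low 32 bits only */

        uc_reg_read(uc, UC_ARM64_REG_X0, &x0);
        printf("X0 = 0x%016llx\n", (unsigned long long)x0); /* 0x11223344deadbeef */

        uc_close(uc);
        return 0;
    }
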
Nguyen Anh Quynh
52cb0ba78e cleanup more synchronization code 2017-01-09 14:05:39 +08:00
cojocar
428cb83060 Support for MCLASS ARM cpu (Cortex-M3) (#700)
Support for the Cortex-M ARM CPU already exists in QEMU. This patch just
exposes a "cortex-m3" CPU.

Calling "uc_open(UC_ARCH_ARM, UC_MODE_THUMB | UC_MODE_MCLASS, &uc);"
instantiates a CPU with this feature on.

Signed-off-by: Lucian Cojocar <lucian@cojocar.com>
2016-12-27 22:49:06 +08:00
Nguyen Anh Quynh
4083b87032 add new hook type UC_HOOK_MEM_READ_AFTER, adapted from PR #399 by @farmdve. updated all bindings, except Ruby & Haskell 2016-10-22 11:19:55 +08:00
Andrew Dutcher
80f35d3b2b remove safety checks, for some reason 2016-10-11 13:07:14 -07:00
Andrew Dutcher
ea54204952 Tweak some names in a few places, encapsulate the uc_context struct to hide it from users for some reason 2016-10-10 14:04:51 -07:00
Ryan Hileman
cb615fdba7 remove uc->cpus 2016-09-23 07:38:21 -07:00
Nguyen Anh Quynh
69d976375e Merge branch 'fix/self_modifying' of https://github.com/rhelmot/unicorn into rhelmot-fix/self_modifying 2016-08-30 21:20:22 +08:00
Nguyen Anh Quynh
8b030ae51a fix for issue #523 2016-08-27 21:49:11 +08:00
Andrew Dutcher
97b10da133 Undo the disaster that was the patch to unicorn github issue #266 and fix it correctly. makes normal self-modifying code work. 2016-08-09 19:35:20 -07:00
Nguyen Anh Quynh
3a742fb6f6 fix conflicts when merging no-thread to master 2016-04-23 10:06:57 +08:00
Nguyen Anh Quynh
cc6cbc5cf7 Merge branch 'memleak' into m2 2016-04-18 12:48:13 +08:00
Nguyen Anh Quynh
47a7bb3c9f Merge branch 'smaller_nothreads' of https://github.com/cseagle/unicorn into cseagle-smaller_nothreads 2016-04-17 23:37:06 +08:00
Ryan Hileman
acd88856e1 add batched reg access 2016-04-04 20:51:38 -07:00
Chris Eagle
9467254fc0 strip out per cpu thread code 2016-03-25 17:24:28 -07:00
Nguyen Anh Quynh
fb1ebac000 Merge branch 'master' into m1 2016-03-09 15:13:42 +08:00
Hiroyuki UEKAWA
c5888e5670 move macros in qemu/target-*/unicorn*.c to uc_priv.h 2016-03-02 12:43:02 +09:00
Nguyen Anh Quynh
20b01a6933 fix merge conflict 2016-02-01 12:08:38 +08:00
Nguyen Anh Quynh
e750a4e97c when uc_mem_exec() remove EXE permission, quit current TB & continue emulating with TB flushed. this fixes issue in PR #378 2016-01-28 00:56:55 +08:00
Nguyen Anh Quynh
48ab148d1c Merge branch 'hook' 2016-01-26 22:52:29 +08:00